Generative AI for Ophthalmological Image Synthesis

Project Overview

This project aims to develop a generative AI model for creating synthetic ophthalmological images based on the Brazilian Multilabel Ophthalmological Dataset (BRSET). My goal is to both classify these images and generate high-quality, diverse synthetic images that could potentially be used for augmenting datasets, improving model training, and advancing research in ophthalmology. The data for this project is provided on PhysioNet, a platform created by MIT and Harvard Medical School for sharing biomedical data. PhysioNet requires users to complete HIPAA training and sign a data use agreement before accessing the data. In compliance with that agreement, the data will not be shared in this repository, and any models trained on the data must be shared with the community.

Methodology

My approach is inspired by the paper "Using generative AI to investigate medical imagery models and datasets" (Lang et al., 2024). I will implement a multistep process:

  1. Image Classification: Train a deep learning classifier on the BRSET dataset to predict various ophthalmological conditions.

  2. Generative Model: Develop a StyleGAN2 based generative model, incorporating guidance from my trained classifier.

  3. Attribute Discovery: Use the trained generator to identify and visualize key attributes that influence the classifier's predictions.

  4. Analysis and Interpretation: Examine the generated images and attributes to gain insights into the model's understanding of ophthalmological features.
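Steps 2 and 3 hinge on classifier guidance: the generator is trained with an extra loss term that pushes its outputs toward a target prediction from the frozen classifier. The sketch below shows only that loss structure with toy stand-in models; the model shapes, the `lambda_cls` guidance weight, and the placeholder adversarial term are illustrative assumptions (the real adversarial loss comes from the StyleGAN2 discriminator).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Toy stand-ins for the real StyleGAN2 generator and the trained classifier.
generator = models.Sequential([
    layers.Input(shape=(16,)),                   # latent vector z
    layers.Dense(8 * 8 * 3, activation='tanh'),
    layers.Reshape((8, 8, 3)),
])
classifier = models.Sequential([
    layers.Input(shape=(8, 8, 3)),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),
])
classifier.trainable = False                     # guidance only: never update the classifier

bce = tf.keras.losses.BinaryCrossentropy()
lambda_cls = 0.5                                 # hypothetical guidance weight

z = tf.random.normal((4, 16))
target = tf.ones((4, 1))                         # steer samples toward the positive class

with tf.GradientTape() as tape:
    fake = generator(z, training=True)
    adv_loss = tf.reduce_mean(tf.square(fake))   # placeholder for the StyleGAN2 adversarial loss
    cls_loss = bce(target, classifier(fake))     # classifier-guidance term
    total_loss = adv_loss + lambda_cls * cls_loss

# Gradients flow through the frozen classifier into the generator's weights.
grads = tape.gradient(total_loss, generator.trainable_variables)
```

In the full pipeline these gradients would be applied with the generator's optimizer at each training step; only the guidance structure is shown here.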

Project Goals

  • Create a high-performance classifier for ophthalmological conditions using the BRSET dataset.
  • Implement a StyleGAN2 based generator capable of producing realistic eye images.
  • Discover and visualize attributes that are important for classifying various eye conditions.
  • Generate synthetic images that could potentially be used to augment existing datasets.

Ethical Considerations

While this project aims to advance medical imaging research, we must be mindful of the ethical implications of generating synthetic medical data. All generated images should be clearly labeled as synthetic and not used for diagnostic purposes without extensive validation.
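One lightweight way to honor the labeling requirement is to stamp provenance metadata into every generated file at save time. The sketch below uses Pillow's PNG text chunks; the tag name and wording are my own convention, not a BRSET or PhysioNet requirement.

```python
import numpy as np
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_synthetic(array, path):
    """Save a generated image with an embedded 'synthetic' provenance tag."""
    img = Image.fromarray(array)
    meta = PngInfo()
    # Hypothetical tag: downstream consumers can check this chunk before any use.
    meta.add_text("provenance", "SYNTHETIC - model-generated; not for diagnostic use")
    img.save(path, pnginfo=meta)

# Demo with a blank placeholder standing in for a generated fundus image.
save_synthetic(np.zeros((64, 64, 3), dtype=np.uint8), "synthetic_example.png")
print(Image.open("synthetic_example.png").text["provenance"])
```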

Getting Started

This notebook will guide you through the implementation of each step in my methodology. Let's begin by setting up our environment and loading the BRSET dataset.

In [ ]:
# Connect google drive to colab for training on the dataset
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [ ]:
# Google Drive base directory
BASE_DIR = '/content/drive/MyDrive/'

# Local Base Directory
# BASE_DIR = './'
In [ ]:
# Standard libraries
import os
import random
import warnings
import numpy as np
from numpy.random import normal
import pandas as pd
from tqdm import tqdm

# Deep learning and image processing
import tensorflow as tf
from tensorflow import keras

# TensorFlow and Keras modules
from tensorflow.keras import layers, models, optimizers, losses, metrics, backend, applications
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from tensorflow.keras.mixed_precision import global_policy
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.utils import custom_object_scope
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping


# Scikit-learn for data splitting and evaluation metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

# Stats libraries for statistical analysis
from scipy.stats import pearsonr

# Plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns

# Suppress warnings
warnings.filterwarnings('ignore')

# Set random seeds for reproducibility
SEED = 12
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Ensure TensorFlow is using GPU
physical_devices = tf.config.list_physical_devices('GPU')
if len(physical_devices) > 0:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

# Enable eager execution
tf.compat.v1.enable_eager_execution()

# Display all outputs in a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

# Print environment information
print("Num GPUs Available:", len(tf.config.experimental.list_physical_devices('GPU')))
print("TensorFlow version:", tf.__version__)
print("Keras version:", tf.keras.__version__)
print("Eager execution enabled:", tf.executing_eagerly())
Num GPUs Available: 1
TensorFlow version: 2.17.0
Keras version: 3.4.1
Eager execution enabled: True

Load in the labels dataset

In [ ]:
# Import the data labels

# Local Labels path
#label_path = '/Volumes/Extreme SSD/a-brazilian-multilabel-ophthalmological-dataset-brset-1.0.0/labels.csv'

# Google Drive Labels path
label_path = '/content/drive/MyDrive/a-brazilian-multilabel-ophthalmological-dataset-brset-1.0.0/labels.csv'

labels = pd.read_csv(label_path)

# Display the first few rows of the data labels
labels.head()
Out[ ]:
image_id patient_id camera patient_age comorbidities diabetes_time_y insuline patient_sex exam_eye diabetes ... amd vascular_occlusion hypertensive_retinopathy drusens hemorrhage retinal_detachment myopic_fundus increased_cup_disc other quality
0 img00001 1 Canon CR 48.0 diabetes1 12 yes 1 1 yes ... 0 0 0 0 0 0 0 1 0 Adequate
1 img00002 1 Canon CR 48.0 diabetes1 12 yes 1 2 yes ... 0 0 0 0 0 0 0 1 0 Adequate
2 img00003 2 Canon CR 18.0 diabetes1 7 yes 2 1 yes ... 0 0 0 0 0 0 0 0 0 Adequate
3 img00004 2 Canon CR 18.0 diabetes1 7 yes 2 2 yes ... 0 0 0 0 0 0 0 0 0 Adequate
4 img00005 3 Canon CR 22.0 diabetes1 11 yes 1 1 yes ... 0 0 0 0 0 0 0 0 0 Adequate

5 rows × 34 columns

Load in images and inspect the data

In [ ]:
# Define the path to the fundus photos

# Local image path
#IMAGE_PATH = '/Volumes/Extreme SSD/a-brazilian-multilabel-ophthalmological-dataset-brset-1.0.0/fundus_photos/'

# Google drive image path
IMAGE_PATH = '/content/drive/MyDrive/a-brazilian-multilabel-ophthalmological-dataset-brset-1.0.0/fundus_photos/'

def load_and_preprocess_image(image_id, target_size=(224, 224)):
    """
    Load and preprocess a fundus photo given its image_id.

    Args:
    image_id (str): The ID of the image to load.
    target_size (tuple): The target size to resize the image to.

    Returns:
    numpy.array: The preprocessed image as a numpy array.
    """
    # Construct the full path to the image
    image_path = os.path.join(IMAGE_PATH, f"{image_id}.jpg")

    # Load the image
    img = load_img(image_path, target_size=target_size)

    # Convert the image to a numpy array
    img_array = img_to_array(img)

    # Normalize the image
    img_array = img_array / 255.0

    return img_array

def load_batch_of_images(image_ids, batch_size=32):
    """
    Load and preprocess a batch of fundus photos.

    Args:
    image_ids (list): List of image IDs to load.
    batch_size (int): Number of images to load at once.

    Returns:
    numpy.array: A batch of preprocessed images.
    """
    images = []
    for i in range(0, len(image_ids), batch_size):
        batch_ids = image_ids[i:i+batch_size]
        batch_images = [load_and_preprocess_image(id) for id in batch_ids]
        images.extend(batch_images)
    return np.array(images)

# Load the first 100 images
first_100_image_ids = labels['image_id'].iloc[:100].tolist()
batch_of_images = load_batch_of_images(first_100_image_ids)

print(f"Batch of images shape: {batch_of_images.shape}")

# Display a grid of the first 16 images
plt.figure(figsize=(20, 20))
for i in range(16):
    plt.subplot(4, 4, i+1);
    plt.imshow(batch_of_images[i]);
    plt.axis('off');
    plt.title(f"Image ID: {first_100_image_ids[i]}");
plt.tight_layout();
plt.show();
Batch of images shape: (100, 224, 224, 3)

Shown above are 16 example fundus images from the dataset. The dataset contains images of various eye conditions, such as diabetic retinopathy, glaucoma, and macular degeneration, and each image is associated with multiple labels indicating the presence of different conditions. We will use this dataset to train our image classifier.

EDA and dataset preparation/cleaning

Labels data EDA

In [ ]:
labels.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16266 entries, 0 to 16265
Data columns (total 34 columns):
 #   Column                    Non-Null Count  Dtype  
---  ------                    --------------  -----  
 0   image_id                  16266 non-null  object 
 1   patient_id                16266 non-null  int64  
 2   camera                    16266 non-null  object 
 3   patient_age               10821 non-null  float64
 4   comorbidities             8030 non-null   object 
 5   diabetes_time_y           1910 non-null   object 
 6   insuline                  1714 non-null   object 
 7   patient_sex               16266 non-null  int64  
 8   exam_eye                  16266 non-null  int64  
 9   diabetes                  16266 non-null  object 
 10  nationality               16266 non-null  object 
 11  optic_disc                16266 non-null  object 
 12  vessels                   16266 non-null  int64  
 13  macula                    16266 non-null  int64  
 14  DR_SDRG                   16266 non-null  int64  
 15  DR_ICDR                   16266 non-null  int64  
 16  focus                     16266 non-null  int64  
 17  iluminaton                16266 non-null  int64  
 18  image_field               16266 non-null  int64  
 19  artifacts                 16266 non-null  int64  
 20  diabetic_retinopathy      16266 non-null  int64  
 21  macular_edema             16266 non-null  int64  
 22  scar                      16266 non-null  int64  
 23  nevus                     16266 non-null  int64  
 24  amd                       16266 non-null  int64  
 25  vascular_occlusion        16266 non-null  int64  
 26  hypertensive_retinopathy  16266 non-null  int64  
 27  drusens                   16266 non-null  int64  
 28  hemorrhage                16266 non-null  int64  
 29  retinal_detachment        16266 non-null  int64  
 30  myopic_fundus             16266 non-null  int64  
 31  increased_cup_disc        16266 non-null  int64  
 32  other                     16266 non-null  int64  
 33  quality                   16266 non-null  object 
dtypes: float64(1), int64(24), object(9)
memory usage: 4.2+ MB
In [ ]:
# List of binary diagnosis columns
binary_diagnoses = ['diabetic_retinopathy', 'macular_edema', 'scar', 'nevus', 'amd',
                    'vascular_occlusion', 'hypertensive_retinopathy', 'drusens',
                    'hemorrhage', 'retinal_detachment', 'myopic_fundus', 'increased_cup_disc']

# Calculate the counts and percentages in total patient population
diagnosis_counts = labels[binary_diagnoses].sum().sort_values(ascending=False)
total_patients = len(labels)
diagnosis_percentages = (diagnosis_counts / total_patients) * 100

plt.figure(figsize=(15, 8))
ax = diagnosis_counts.plot(kind='bar')

# Add count and percentage labels on top of each bar
for i, (count, percentage) in enumerate(zip(diagnosis_counts, diagnosis_percentages)):
    ax.text(i, count, f'N={count}\n({percentage:.1f}%)',
            ha='center', va='bottom')

plt.title('Distribution of Diagnoses')
plt.xlabel('Diagnosis (N = count), percentage of total patients (%)')
plt.ylabel('Count')
plt.yticks(range(0, 3001, 500))
plt.xticks(rotation=45, ha='right')
plt.tight_layout()
plt.show();

The most frequent diagnosis in the dataset is increased cup-to-disc ratio, followed by drusens and diabetic retinopathy.

In [ ]:
# Correlation matrix of numerical columns and binary diagnoses
numerical_columns = ['patient_age', 'patient_sex', 'exam_eye', 'DR_SDRG', 'DR_ICDR',
                     'focus', 'iluminaton', 'image_field', 'artifacts']
correlation_matrix = labels[numerical_columns + binary_diagnoses].corr()
plt.figure(figsize=(20, 16))
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', linewidths=0.5, fmt='.2f')
plt.title('Correlation Matrix')
plt.tight_layout()
plt.show();
In [ ]:
# Print out top correlations in the correlation matrix
def print_top_correlations(correlation_matrix, n=20):
    # Unstack the correlation matrix
    correlations = correlation_matrix.unstack()

    # Sort correlations in descending order of absolute value
    correlations = correlations.abs().sort_values(ascending=False)

    # Remove self correlations
    correlations = correlations[correlations != 1.0]

    
    seen_pairs = set()

    print(f"Top {n} Correlation Pairs:")
    count = 0
    for (var1, var2), correlation in correlations.items():
        pair = frozenset([var1, var2])

        if pair not in seen_pairs:
            print(f"{var1} - {var2}: {correlation_matrix.loc[var1, var2]:.4f}")
            seen_pairs.add(pair)
            count += 1

            if count == n:
                break

correlation_matrix = labels[numerical_columns + binary_diagnoses].corr()
print_top_correlations(correlation_matrix, n=10)
Top 10 Correlation Pairs:
DR_SDRG - DR_ICDR: 0.9848
DR_ICDR - diabetic_retinopathy: 0.9173
DR_SDRG - diabetic_retinopathy: 0.9093
diabetic_retinopathy - macular_edema: 0.5572
macular_edema - DR_SDRG: 0.5328
DR_ICDR - macular_edema: 0.5269
drusens - patient_age: 0.2560
vascular_occlusion - hemorrhage: 0.1659
amd - patient_age: 0.1496
increased_cup_disc - patient_age: 0.1167

The correlation analysis of the Brazilian Multilabel Ophthalmological Dataset (BRSET) reveals several interesting relationships between various ophthalmological parameters and diagnoses.

  1. DR_SDRG - DR_ICDR (0.9848): This extremely high correlation is expected, as both are classification systems for diabetic retinopathy (DR). The Scottish Diabetic Retinopathy Grading (SDRG) scheme and the International Clinical Diabetic Retinopathy (ICDR) scale are closely aligned in their assessment of DR severity.

  2. DR_ICDR - diabetic_retinopathy (0.9173) and DR_SDRG - diabetic_retinopathy (0.9093): These strong correlations indicate that both grading systems (ICDR and SDRG) are highly predictive of the presence of diabetic retinopathy. This validates the consistency between the binary classification (presence/absence) and the more detailed grading scales.

  3. diabetic_retinopathy - macular_edema (0.5572): This moderate positive correlation suggests that patients with diabetic retinopathy are more likely to also have macular edema. This is clinically significant, as macular edema is a common complication of diabetic retinopathy.

  4. macular_edema - DR_SDRG (0.5328) and DR_ICDR - macular_edema (0.5269): These correlations further support the relationship between the severity of diabetic retinopathy (as measured by both scales) and the presence of macular edema. As the severity of DR increases, the likelihood of macular edema also increases.

  5. drusens - patient_age (0.2560): This weak positive correlation suggests that drusens (small yellow or white accumulations of extracellular material in the retina) are more common in older patients. This aligns with clinical knowledge, as drusens are often associated with age-related macular degeneration (AMD).

  6. vascular_occlusion - hemorrhage (0.1659): This weak positive correlation indicates a relationship between vascular occlusions and hemorrhages in the retina. This makes clinical sense, as occlusions can lead to bleeding in the affected blood vessels.

  7. amd - patient_age (0.1496): The weak positive correlation between age and age-related macular degeneration (AMD) is expected, as AMD is more prevalent in older populations.

  8. increased_cup_disc - patient_age (0.1167): This weak positive correlation suggests that an increased cup-to-disc ratio, a finding associated with glaucoma, becomes somewhat more common with age. This is consistent with the age-related rise in glaucoma prevalence.

These correlations provide valuable insights into the relationships between various ophthalmological conditions and patient characteristics in the BRSET dataset. They highlight the interconnected nature of diabetic retinopathy, macular edema, and age-related eye conditions. These findings can inform feature selection for machine learning models and guide further clinical research into the progression and comorbidities of retinal diseases.

In [ ]:
# Distribution of camera types
plt.figure(figsize=(10, 6))
labels['camera'].value_counts().plot(kind='bar')
plt.title('Distribution of Camera Types')
plt.xlabel('Camera')
plt.ylabel('Count')
plt.xticks(rotation=45, ha='right')
plt.tight_layout()
plt.show();
In [ ]:
# Distribution of image quality
plt.figure(figsize=(10, 6))
labels['quality'].value_counts().plot(kind='bar')
plt.title('Distribution of Image Quality')
plt.xlabel('Quality')
plt.ylabel('Count')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show();

Based on the plot above, we can see that a subset of the images is labeled as Inadequate quality. We will remove these images from the dataset, as they are not useful for training our classifier.

In [ ]:
# Shape before dropping inadequate images
labels.shape
Out[ ]:
(16266, 34)
In [ ]:
# Drop the inadequate quality images
labels = labels[labels['quality'] != 'Inadequate']
In [ ]:
# Shape after dropping inadequate quality images
labels.shape
Out[ ]:
(14279, 34)
In [ ]:
# Distribution of DR severity (DR_ICDR)
plt.figure(figsize=(10, 6))
labels['DR_ICDR'].value_counts().sort_index().plot(kind='bar')
plt.title('Distribution of Diabetic Retinopathy Severity (ICDR scale)')
plt.xlabel('Severity')
plt.ylabel('Count')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show();
In [ ]:
# Age distribution
plt.figure(figsize=(10, 6))
sns.histplot(data=labels, x='patient_age', kde=True)
plt.title('Distribution of Patient Age')
plt.xlabel('Age')
plt.ylabel('Count')
plt.tight_layout()
plt.show();
In [ ]:
# Diabetes duration distribution
plt.figure(figsize=(10, 6))
labels['diabetes_time_y'] = pd.to_numeric(labels['diabetes_time_y'], errors='coerce')
sns.histplot(data=labels, x='diabetes_time_y', kde=True)
plt.title('Distribution of Diabetes Duration')
plt.xlabel('Years')
plt.ylabel('Count')
plt.tight_layout()
plt.show();
In [ ]:
# Relationship between age and diabetes duration
plt.figure(figsize=(10, 6))
sns.scatterplot(data=labels, x='patient_age', y='diabetes_time_y')
plt.title('Relationship between Patient Age and Diabetes Duration')
plt.xlabel('Patient Age')
plt.ylabel('Diabetes Duration (years)')
plt.tight_layout()
plt.show();
In [ ]:
# Distribution of patient sex
plt.figure(figsize=(8, 6))
labels['patient_sex'].map({1: 'Male', 2: 'Female'}).value_counts().plot(kind='bar')
plt.title('Distribution of Patient Sex')
plt.xlabel('Sex')
plt.ylabel('Count')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show();
In [ ]:
# Distribution of examined eye
plt.figure(figsize=(8, 6))
labels['exam_eye'].map({1: 'Right', 2: 'Left'}).value_counts().plot(kind='bar')
plt.title('Distribution of Examined Eye')
plt.xlabel('Eye')
plt.ylabel('Count')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show();
In [ ]:
print("Summary Statistics:")
labels[numerical_columns + binary_diagnoses].describe()
Summary Statistics:
Out[ ]:
patient_age patient_sex exam_eye DR_SDRG DR_ICDR focus iluminaton image_field artifacts diabetic_retinopathy ... scar nevus amd vascular_occlusion hypertensive_retinopathy drusens hemorrhage retinal_detachment myopic_fundus increased_cup_disc
count 9858.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.0 14279.0 14279.0 14279.0 14279.000000 ... 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000 14279.000000
mean 57.460641 1.618111 1.505778 0.187968 0.187058 1.0 1.0 1.0 1.0 0.066601 ... 0.018699 0.008894 0.022481 0.006513 0.017578 0.178864 0.006233 0.000490 0.016178 0.198893
std 18.207040 0.485867 0.499984 0.764434 0.753912 0.0 0.0 0.0 0.0 0.249339 ... 0.135464 0.093892 0.148246 0.080443 0.131417 0.383252 0.078705 0.022136 0.126163 0.399182
min 5.000000 1.000000 1.000000 0.000000 0.000000 1.0 1.0 1.0 1.0 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 47.000000 1.000000 1.000000 0.000000 0.000000 1.0 1.0 1.0 1.0 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
50% 60.000000 2.000000 2.000000 0.000000 0.000000 1.0 1.0 1.0 1.0 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
75% 71.000000 2.000000 2.000000 0.000000 0.000000 1.0 1.0 1.0 1.0 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
max 97.000000 2.000000 2.000000 4.000000 4.000000 1.0 1.0 1.0 1.0 1.000000 ... 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000

8 rows × 21 columns

In [ ]:
print("\nMissing Values:")
print(labels.isnull().sum())
Missing Values:
image_id                        0
patient_id                      0
camera                          0
patient_age                  4421
comorbidities                6928
diabetes_time_y             12548
insuline                    12685
patient_sex                     0
exam_eye                        0
diabetes                        0
nationality                     0
optic_disc                      0
vessels                         0
macula                          0
DR_SDRG                         0
DR_ICDR                         0
focus                           0
iluminaton                      0
image_field                     0
artifacts                       0
diabetic_retinopathy            0
macular_edema                   0
scar                            0
nevus                           0
amd                             0
vascular_occlusion              0
hypertensive_retinopathy        0
drusens                         0
hemorrhage                      0
retinal_detachment              0
myopic_fundus                   0
increased_cup_disc              0
other                           0
quality                         0
dtype: int64

As seen above, the diabetes_time_y and insuline columns have a large number of missing values, so we will drop them. The comorbidities column also has many missing values and should be inspected further before deciding how to handle it. patient_age has quite a few missing values as well, which we will impute with the median age.

In [ ]:
# Impute the missing ages with the median age
median_age = labels['patient_age'].median()
labels['patient_age'] = labels['patient_age'].fillna(median_age)

# Drop the diabetes_time_y column and insulin column
labels = labels.drop(columns=['diabetes_time_y', 'insuline'])
In [ ]:
labels['comorbidities'].value_counts()
Out[ ]:
count
comorbidities
0 2546
SAH 1574
diabetes, SAH 943
diabetes 777
diabetes1 378
... ...
SAH, chagas 1
herpetic encephalitis 1
hypothyroidism, hypophysis adenoma 1
syphilis 1
SAH, arthritis 1

208 rows × 1 columns


Based on the comorbidities column above, the values are largely free text with a long tail of rare entries (208 distinct values), so the column is not meaningful for modeling and we will drop it.

In [ ]:
# Drop the comorbidities column
labels = labels.drop(columns=['comorbidities'])
In [ ]:
labels.isna().sum()
Out[ ]:
0
image_id 0
patient_id 0
camera 0
patient_age 0
patient_sex 0
exam_eye 0
diabetes 0
nationality 0
optic_disc 0
vessels 0
macula 0
DR_SDRG 0
DR_ICDR 0
focus 0
iluminaton 0
image_field 0
artifacts 0
diabetic_retinopathy 0
macular_edema 0
scar 0
nevus 0
amd 0
vascular_occlusion 0
hypertensive_retinopathy 0
drusens 0
hemorrhage 0
retinal_detachment 0
myopic_fundus 0
increased_cup_disc 0
other 0
quality 0

In [ ]:
labels.shape
Out[ ]:
(14279, 31)

Images EDA

This section will explore properties of the images in the dataset, such as their dimensions, color channels, and visual characteristics. This will help understand the nature of the data and guide preprocessing steps for training our image classifier.

In [ ]:
# Calculate mean, std, min, and max pixel values across initial 100 images
mean_pixel_value = np.mean(batch_of_images)
std_pixel_value = np.std(batch_of_images)
min_pixel_value = np.min(batch_of_images)
max_pixel_value = np.max(batch_of_images)

print(f"Mean pixel value: {mean_pixel_value:.4f}")
print(f"Std dev of pixel values: {std_pixel_value:.4f}")
print(f"Min pixel value: {min_pixel_value:.4f}")
print(f"Max pixel value: {max_pixel_value:.4f}")
Mean pixel value: 0.2096
Std dev of pixel values: 0.2001
Min pixel value: 0.0000
Max pixel value: 1.0000
In [ ]:
# Plot histogram of pixel intensities
plt.figure(figsize=(10, 6))
plt.hist(batch_of_images.ravel(), bins=50, range=(0, 1))
plt.title("Histogram of Pixel Intensities")
plt.xlabel("Pixel Intensity")
plt.ylabel("Frequency")
plt.show();
In [ ]:
# Separate color channels
red_channel = batch_of_images[:, :, :, 0]
green_channel = batch_of_images[:, :, :, 1]
blue_channel = batch_of_images[:, :, :, 2]

# Plot histograms for each channel
plt.figure(figsize=(15, 5))

plt.subplot(131)
plt.hist(red_channel.ravel(), bins=50, color='red', alpha=0.7)
plt.title("Red Channel")

plt.subplot(132)
plt.hist(green_channel.ravel(), bins=50, color='green', alpha=0.7)
plt.title("Green Channel")

plt.subplot(133)
plt.hist(blue_channel.ravel(), bins=50, color='blue', alpha=0.7)
plt.title("Blue Channel")

plt.tight_layout()
plt.show();
In [ ]:
# Image channel correlations
r_g_corr = pearsonr(red_channel.ravel(), green_channel.ravel())[0]
r_b_corr = pearsonr(red_channel.ravel(), blue_channel.ravel())[0]
g_b_corr = pearsonr(green_channel.ravel(), blue_channel.ravel())[0]

print(f"Correlation between Red and Green channels: {r_g_corr:.4f}")
print(f"Correlation between Red and Blue channels: {r_b_corr:.4f}")
print(f"Correlation between Green and Blue channels: {g_b_corr:.4f}")
Correlation between Red and Green channels: 0.9373
Correlation between Red and Blue channels: 0.8791
Correlation between Green and Blue channels: 0.9503
In [ ]:
# Calculate brightness
brightness = np.mean(batch_of_images, axis=3)

plt.figure(figsize=(10, 6))
plt.hist(brightness.ravel(), bins=50)
plt.title("Histogram of Image Brightness")
plt.xlabel("Brightness")
plt.ylabel("Frequency")
plt.show();
In [ ]:
# Calculate contrast
contrast = np.std(batch_of_images, axis=3)

plt.figure(figsize=(10, 6))
plt.hist(contrast.ravel(), bins=50)
plt.title("Histogram of Image Contrast")
plt.xlabel("Contrast")
plt.ylabel("Frequency")
plt.show();

Model Building

The model will focus on predicting diabetic retinopathy. Diabetic retinopathy is a common complication of diabetes and a leading cause of blindness in adults. Early detection and treatment are crucial for preventing vision loss. The model will be trained on the BRSET dataset, which contains retinal fundus images labeled with various ophthalmological conditions, including diabetic retinopathy.

Data Pipeline

In [ ]:
# This is the condition of interest for classification, this condition had the 3rd highest number of cases in the dataset
CLASSIFIER = 'diabetic_retinopathy'
In [ ]:
labels[CLASSIFIER].value_counts()
Out[ ]:
count
diabetic_retinopathy
0 13328
1 951

To balance the dataset, we will randomly sample the same number of negative images as there are positive cases of diabetic retinopathy. Negative samples are restricted to images with no diagnosis for any condition in this dataset, so the classifier learns to distinguish diabetic retinopathy from healthy retinas rather than from other pathologies.

In [ ]:
# Get all rows where dr == 1
classifier_positive = labels[labels[CLASSIFIER] == 1]

# Get the count of positive cases
condition_positive = len(classifier_positive)

# Create a mask for normal images (all 0 in the binary diagnosis columns)
normal_mask = (
    (labels['diabetic_retinopathy'] == 0) &
    (labels['macular_edema'] == 0) &
    (labels['scar'] == 0) &
    (labels['nevus'] == 0) &
    (labels['amd'] == 0) &
    (labels['vascular_occlusion'] == 0) &
    (labels['hypertensive_retinopathy'] == 0) &
    (labels['drusens'] == 0) &
    (labels['hemorrhage'] == 0) &
    (labels['retinal_detachment'] == 0) &
    (labels['myopic_fundus'] == 0) &
    (labels['increased_cup_disc'] == 0) &
    (labels['other'] == 0)
)

# Get all normal images
normal_images = labels[normal_mask]

# Randomly sample the same number of normal images as dr positive cases
sampled_normal = normal_images.sample(n=condition_positive, random_state=12)

# Combine classifier-positive cases and sampled normal images
subset_df = pd.concat([classifier_positive, sampled_normal])

# Shuffle the combined dataframe
subset_df = subset_df.sample(frac=1, random_state=12).reset_index(drop=True)

print(f"Total number of normal (no diagnosis) images {len(normal_images)}")
print(f"Sampled normal: {len(sampled_normal)}")
print(f"{CLASSIFIER} positive: {len(classifier_positive)}")
print(f"Total samples in combined subset: {len(subset_df)}")
Total number of normal (no diagnosis) images 7291
Sampled normal: 951
diabetic_retinopathy positive: 951
Total samples in combined subset: 1902
In [ ]:
subset_df.head()
Out[ ]:
image_id patient_id camera patient_age patient_sex exam_eye diabetes nationality optic_disc vessels ... amd vascular_occlusion hypertensive_retinopathy drusens hemorrhage retinal_detachment myopic_fundus increased_cup_disc other quality
0 img03149 1670 Canon CR 74.0 2 2 yes Brazil 1 1 ... 0 0 0 0 0 0 0 0 0 Adequate
1 img15335 8042 Canon CR 67.0 2 1 yes Brazil 1 1 ... 0 0 0 0 0 0 0 0 0 Adequate
2 img14601 7663 NIKON NF5050 60.0 2 1 no Brazil 1 1 ... 0 0 0 0 0 0 0 0 0 Adequate
3 img13790 7253 Canon CR 53.0 1 1 no Brazil 1 1 ... 0 0 0 0 0 0 0 0 0 Adequate
4 img01212 631 Canon CR 30.0 2 1 no Brazil 1 1 ... 0 0 0 0 0 0 0 0 0 Adequate

5 rows × 31 columns

In [ ]:
subset_df = subset_df.drop(columns = ['camera', 'nationality', 'other', 'quality', 'patient_id'], errors = 'ignore')
In [ ]:
subset_df.columns
Out[ ]:
Index(['image_id', 'patient_age', 'patient_sex', 'exam_eye', 'diabetes',
       'optic_disc', 'vessels', 'macula', 'DR_SDRG', 'DR_ICDR', 'focus',
       'iluminaton', 'image_field', 'artifacts', 'diabetic_retinopathy',
       'macular_edema', 'scar', 'nevus', 'amd', 'vascular_occlusion',
       'hypertensive_retinopathy', 'drusens', 'hemorrhage',
       'retinal_detachment', 'myopic_fundus', 'increased_cup_disc'],
      dtype='object')
In [ ]:
# Counts of the balanced dataset for the classifier
subset_df[CLASSIFIER].value_counts()
Out[ ]:
count
diabetic_retinopathy
1 951
0 951

In [ ]:
subset_df[CLASSIFIER].value_counts(normalize=True)
Out[ ]:
proportion
diabetic_retinopathy
1 0.5
0 0.5

In [ ]:
# Code for creating the dataset for tensorflow including image augmentation
binary_diagnoses = [CLASSIFIER]

IMAGE_SIZE = (224, 224)

def parse_image(filename, labels):
    image = tf.io.read_file(filename)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, IMAGE_SIZE)  # Resize to 224x224
    image = tf.cast(image, tf.float32) / 255.0  # Normalize 0-1
    return image, labels

def augment_image(image, labels):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, labels

def create_dataset(labels_df, batch_size=32, shuffle=True, augment=False):

    # Match labels with image paths
    filenames = labels_df['image_id'].apply(lambda x: os.path.join(IMAGE_PATH, x + '.jpg')).tolist()

    # Get labels
    labels = labels_df[binary_diagnoses].values.astype(np.float32).tolist()

    dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))

    # Parse images and labels
    dataset = dataset.map(parse_image, num_parallel_calls=tf.data.AUTOTUNE)

    if augment:
        dataset = dataset.map(augment_image, num_parallel_calls=tf.data.AUTOTUNE)

    if shuffle:
        dataset = dataset.shuffle(buffer_size=1000)

    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)

    return dataset
In [ ]:
# Split the dataset into training and validation sets based on the labels dataframe
train_df, val_df = train_test_split(subset_df, test_size=0.2, random_state=12)

BATCH_SIZE = 16

# Create datasets
train_dataset = create_dataset(train_df, batch_size=BATCH_SIZE, shuffle=True, augment=True)
val_dataset = create_dataset(val_df, batch_size=BATCH_SIZE, shuffle=False, augment=False)

# Inspect the number of batches in the training and validation datasets
print(f"\nNumber of batches in training dataset: {tf.data.experimental.cardinality(train_dataset)}")
print(f"Number of batches in validation dataset: {tf.data.experimental.cardinality(val_dataset)}")

# Inspect the first batch of the training dataset
for images, labels_batch in train_dataset.take(1):
    print(f"\nShape of the image batch: {images.shape}")
    print(f"Shape of the labels batch: {labels_batch.shape}")
    print(f"Sample labels from the first image: {labels_batch[0]}")
Number of batches in training dataset: 96
Number of batches in validation dataset: 24

Shape of the image batch: (16, 224, 224, 3)
Shape of the labels batch: (16, 1)
Sample labels from the first image: [1.]
In [ ]:
# Inspect the first batch of the training dataset
for images, labels_batch in train_dataset.take(1):
    print(f"\nShape of the image batch: {images.shape}")
    print(f"Shape of the labels batch: {labels_batch.shape}")
    print("\nLabels for each image in the batch:")
    for i, labels in enumerate(labels_batch):
        print(f"Image {i+1}: {labels.numpy()}")

# Print unique label combinations
print("\nUnique label combinations in the batch:")
unique_labels = np.unique(labels_batch.numpy(), axis=0)
for label_combo in unique_labels:
    print(label_combo)

# Count of each label
print("\nCount of each label in the batch:")
label_counts = np.sum(labels_batch.numpy(), axis=0)
for i, count in enumerate(label_counts):
    print(f"{binary_diagnoses[i]}: {count}")
Shape of the image batch: (16, 224, 224, 3)
Shape of the labels batch: (16, 1)

Labels for each image in the batch:
Image 1: [1.]
Image 2: [1.]
Image 3: [1.]
Image 4: [0.]
Image 5: [0.]
Image 6: [1.]
Image 7: [0.]
Image 8: [0.]
Image 9: [1.]
Image 10: [0.]
Image 11: [1.]
Image 12: [0.]
Image 13: [0.]
Image 14: [1.]
Image 15: [1.]
Image 16: [1.]

Unique label combinations in the batch:
[0.]
[1.]

Count of each label in the batch:
diabetic_retinopathy: 9.0

Image classifier¶

First we start with a simple classifier; the second classifier will be a VGG16 model.

Simple Classifier

The simple classifier is a basic CNN with four convolutional layers followed by two fully connected layers, trained on the balanced BRSET subset to predict the presence of diabetic retinopathy in retinal fundus images. Training runs for up to 200 epochs, with the learning rate reduced when the validation loss plateaus.
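As a sanity check on the schedule used below (assuming `ReduceLROnPlateau` with `factor=0.2` and an initial Adam learning rate of `1e-4`), each plateau multiplies the learning rate by 0.2, so the expected sequence of rates can be computed directly:

```python
# Sketch (not part of the training pipeline): the learning rates that a
# ReduceLROnPlateau callback with factor=0.2 would step through, starting
# from the initial Adam rate of 1e-4.
initial_lr = 1e-4
factor = 0.2

for step in range(3):
    print(f"{initial_lr * factor**step:.1e}")
# 1.0e-04
# 2.0e-05
# 4.0e-06
```

These are exactly the three rates that appear in the training log, each reduction triggered after `patience=8` epochs without improvement in `val_loss`.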

In [ ]:
classification_model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

# Optimizer for model
optimizer = Adam(learning_rate=1e-4)
classification_model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])


# Define callbacks

# Learning rate scheduler: multiply the LR by 0.2 when val_loss plateaus
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=8, min_lr=1e-8)

# Early stopping (defined here but not included in the callbacks list below,
# so training runs the full 200-epoch schedule)
early_stopping = EarlyStopping(
    monitor='val_loss',
    patience=15,
    restore_best_weights=True,
    verbose=1
)

# Combine all callbacks
callbacks = [reduce_lr]

# Train the model with all callbacks
history = classification_model.fit(
    train_dataset,
    validation_data=val_dataset,
    epochs=200,
    callbacks=callbacks
)

classification_model.save(f'{BASE_DIR}classification_model.keras')
Epoch 1/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 87s 766ms/step - accuracy: 0.5191 - loss: 0.6929 - val_accuracy: 0.6273 - val_loss: 0.6912 - learning_rate: 1.0000e-04
Epoch 2/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.5070 - loss: 0.6919 - val_accuracy: 0.5984 - val_loss: 0.6826 - learning_rate: 1.0000e-04
Epoch 3/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.5330 - loss: 0.6887 - val_accuracy: 0.5669 - val_loss: 0.6797 - learning_rate: 1.0000e-04
Epoch 4/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.5701 - loss: 0.6808 - val_accuracy: 0.6220 - val_loss: 0.6617 - learning_rate: 1.0000e-04
Epoch 5/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.5932 - loss: 0.6738 - val_accuracy: 0.5774 - val_loss: 0.6748 - learning_rate: 1.0000e-04
Epoch 6/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.5882 - loss: 0.6738 - val_accuracy: 0.6142 - val_loss: 0.6632 - learning_rate: 1.0000e-04
Epoch 7/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.5709 - loss: 0.6807 - val_accuracy: 0.6220 - val_loss: 0.6532 - learning_rate: 1.0000e-04
Epoch 8/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.5866 - loss: 0.6791 - val_accuracy: 0.5722 - val_loss: 0.6816 - learning_rate: 1.0000e-04
Epoch 9/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.5894 - loss: 0.6759 - val_accuracy: 0.6168 - val_loss: 0.6560 - learning_rate: 1.0000e-04
Epoch 10/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.5906 - loss: 0.6736 - val_accuracy: 0.5827 - val_loss: 0.6750 - learning_rate: 1.0000e-04
Epoch 11/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.5983 - loss: 0.6692 - val_accuracy: 0.6010 - val_loss: 0.6576 - learning_rate: 1.0000e-04
Epoch 12/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.6008 - loss: 0.6687 - val_accuracy: 0.5696 - val_loss: 0.6878 - learning_rate: 1.0000e-04
Epoch 13/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.5744 - loss: 0.6864 - val_accuracy: 0.6247 - val_loss: 0.6556 - learning_rate: 1.0000e-04
Epoch 14/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6035 - loss: 0.6643 - val_accuracy: 0.6194 - val_loss: 0.6556 - learning_rate: 1.0000e-04
Epoch 15/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.5783 - loss: 0.6773 - val_accuracy: 0.6010 - val_loss: 0.6593 - learning_rate: 1.0000e-04
Epoch 16/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6273 - loss: 0.6529 - val_accuracy: 0.6142 - val_loss: 0.6551 - learning_rate: 2.0000e-05
Epoch 17/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6263 - loss: 0.6529 - val_accuracy: 0.6273 - val_loss: 0.6506 - learning_rate: 2.0000e-05
Epoch 18/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6347 - loss: 0.6471 - val_accuracy: 0.6247 - val_loss: 0.6510 - learning_rate: 2.0000e-05
Epoch 19/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6300 - loss: 0.6492 - val_accuracy: 0.6168 - val_loss: 0.6515 - learning_rate: 2.0000e-05
Epoch 20/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6117 - loss: 0.6605 - val_accuracy: 0.6220 - val_loss: 0.6501 - learning_rate: 2.0000e-05
Epoch 21/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6309 - loss: 0.6533 - val_accuracy: 0.6168 - val_loss: 0.6558 - learning_rate: 2.0000e-05
Epoch 22/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6263 - loss: 0.6452 - val_accuracy: 0.6168 - val_loss: 0.6534 - learning_rate: 2.0000e-05
Epoch 23/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6616 - loss: 0.6408 - val_accuracy: 0.6089 - val_loss: 0.6590 - learning_rate: 2.0000e-05
Epoch 24/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6306 - loss: 0.6516 - val_accuracy: 0.6247 - val_loss: 0.6476 - learning_rate: 2.0000e-05
Epoch 25/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6383 - loss: 0.6498 - val_accuracy: 0.6273 - val_loss: 0.6566 - learning_rate: 2.0000e-05
Epoch 26/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6207 - loss: 0.6561 - val_accuracy: 0.6220 - val_loss: 0.6467 - learning_rate: 2.0000e-05
Epoch 27/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6362 - loss: 0.6447 - val_accuracy: 0.6220 - val_loss: 0.6514 - learning_rate: 2.0000e-05
Epoch 28/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6320 - loss: 0.6572 - val_accuracy: 0.6299 - val_loss: 0.6547 - learning_rate: 2.0000e-05
Epoch 29/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6262 - loss: 0.6524 - val_accuracy: 0.6273 - val_loss: 0.6461 - learning_rate: 2.0000e-05
Epoch 30/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6422 - loss: 0.6386 - val_accuracy: 0.6247 - val_loss: 0.6444 - learning_rate: 2.0000e-05
Epoch 31/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.5974 - loss: 0.6565 - val_accuracy: 0.6220 - val_loss: 0.6542 - learning_rate: 2.0000e-05
Epoch 32/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6403 - loss: 0.6499 - val_accuracy: 0.6194 - val_loss: 0.6495 - learning_rate: 2.0000e-05
Epoch 33/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6191 - loss: 0.6540 - val_accuracy: 0.6325 - val_loss: 0.6451 - learning_rate: 2.0000e-05
Epoch 34/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6446 - loss: 0.6308 - val_accuracy: 0.6273 - val_loss: 0.6472 - learning_rate: 2.0000e-05
Epoch 35/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6231 - loss: 0.6511 - val_accuracy: 0.6247 - val_loss: 0.6503 - learning_rate: 2.0000e-05
Epoch 36/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6100 - loss: 0.6586 - val_accuracy: 0.6299 - val_loss: 0.6474 - learning_rate: 2.0000e-05
Epoch 37/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6200 - loss: 0.6496 - val_accuracy: 0.6352 - val_loss: 0.6427 - learning_rate: 2.0000e-05
Epoch 38/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6559 - loss: 0.6366 - val_accuracy: 0.6220 - val_loss: 0.6463 - learning_rate: 2.0000e-05
Epoch 39/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6486 - loss: 0.6405 - val_accuracy: 0.6325 - val_loss: 0.6430 - learning_rate: 2.0000e-05
Epoch 40/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6555 - loss: 0.6306 - val_accuracy: 0.6220 - val_loss: 0.6481 - learning_rate: 2.0000e-05
Epoch 41/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6341 - loss: 0.6452 - val_accuracy: 0.6299 - val_loss: 0.6428 - learning_rate: 2.0000e-05
Epoch 42/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6542 - loss: 0.6356 - val_accuracy: 0.6247 - val_loss: 0.6474 - learning_rate: 2.0000e-05
Epoch 43/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6251 - loss: 0.6467 - val_accuracy: 0.6273 - val_loss: 0.6457 - learning_rate: 2.0000e-05
Epoch 44/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.6385 - loss: 0.6473 - val_accuracy: 0.6299 - val_loss: 0.6410 - learning_rate: 2.0000e-05
Epoch 45/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.6442 - loss: 0.6358 - val_accuracy: 0.6194 - val_loss: 0.6476 - learning_rate: 2.0000e-05
Epoch 46/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6510 - loss: 0.6368 - val_accuracy: 0.6483 - val_loss: 0.6381 - learning_rate: 2.0000e-05
Epoch 47/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6528 - loss: 0.6341 - val_accuracy: 0.6535 - val_loss: 0.6373 - learning_rate: 2.0000e-05
Epoch 48/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6481 - loss: 0.6431 - val_accuracy: 0.6273 - val_loss: 0.6440 - learning_rate: 2.0000e-05
Epoch 49/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6556 - loss: 0.6263 - val_accuracy: 0.6194 - val_loss: 0.6468 - learning_rate: 2.0000e-05
Epoch 50/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6605 - loss: 0.6311 - val_accuracy: 0.6273 - val_loss: 0.6460 - learning_rate: 2.0000e-05
Epoch 51/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6501 - loss: 0.6381 - val_accuracy: 0.6299 - val_loss: 0.6410 - learning_rate: 2.0000e-05
Epoch 52/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6478 - loss: 0.6246 - val_accuracy: 0.6089 - val_loss: 0.6517 - learning_rate: 2.0000e-05
Epoch 53/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6551 - loss: 0.6323 - val_accuracy: 0.6483 - val_loss: 0.6369 - learning_rate: 2.0000e-05
Epoch 54/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6484 - loss: 0.6344 - val_accuracy: 0.6247 - val_loss: 0.6462 - learning_rate: 2.0000e-05
Epoch 55/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6596 - loss: 0.6423 - val_accuracy: 0.6509 - val_loss: 0.6371 - learning_rate: 2.0000e-05
Epoch 56/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6385 - loss: 0.6457 - val_accuracy: 0.6142 - val_loss: 0.6536 - learning_rate: 2.0000e-05
Epoch 57/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6489 - loss: 0.6386 - val_accuracy: 0.6299 - val_loss: 0.6427 - learning_rate: 2.0000e-05
Epoch 58/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6506 - loss: 0.6412 - val_accuracy: 0.6299 - val_loss: 0.6476 - learning_rate: 2.0000e-05
Epoch 59/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6732 - loss: 0.6304 - val_accuracy: 0.6509 - val_loss: 0.6367 - learning_rate: 2.0000e-05
Epoch 60/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6563 - loss: 0.6172 - val_accuracy: 0.6404 - val_loss: 0.6371 - learning_rate: 2.0000e-05
Epoch 61/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6301 - loss: 0.6452 - val_accuracy: 0.6562 - val_loss: 0.6353 - learning_rate: 2.0000e-05
Epoch 62/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6528 - loss: 0.6315 - val_accuracy: 0.6430 - val_loss: 0.6383 - learning_rate: 2.0000e-05
Epoch 63/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6783 - loss: 0.6232 - val_accuracy: 0.6299 - val_loss: 0.6475 - learning_rate: 2.0000e-05
Epoch 64/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6691 - loss: 0.6172 - val_accuracy: 0.6142 - val_loss: 0.6477 - learning_rate: 2.0000e-05
Epoch 65/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6348 - loss: 0.6434 - val_accuracy: 0.6535 - val_loss: 0.6341 - learning_rate: 2.0000e-05
Epoch 66/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6601 - loss: 0.6315 - val_accuracy: 0.6220 - val_loss: 0.6451 - learning_rate: 2.0000e-05
Epoch 67/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6594 - loss: 0.6287 - val_accuracy: 0.6142 - val_loss: 0.6448 - learning_rate: 2.0000e-05
Epoch 68/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6401 - loss: 0.6455 - val_accuracy: 0.6509 - val_loss: 0.6352 - learning_rate: 2.0000e-05
Epoch 69/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6557 - loss: 0.6247 - val_accuracy: 0.6614 - val_loss: 0.6336 - learning_rate: 2.0000e-05
Epoch 70/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6438 - loss: 0.6361 - val_accuracy: 0.6457 - val_loss: 0.6351 - learning_rate: 2.0000e-05
Epoch 71/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6523 - loss: 0.6249 - val_accuracy: 0.6562 - val_loss: 0.6330 - learning_rate: 2.0000e-05
Epoch 72/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6324 - loss: 0.6630 - val_accuracy: 0.6352 - val_loss: 0.6466 - learning_rate: 2.0000e-05
Epoch 73/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6616 - loss: 0.6196 - val_accuracy: 0.6509 - val_loss: 0.6325 - learning_rate: 2.0000e-05
Epoch 74/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6664 - loss: 0.6191 - val_accuracy: 0.6457 - val_loss: 0.6376 - learning_rate: 2.0000e-05
Epoch 75/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6310 - loss: 0.6368 - val_accuracy: 0.6535 - val_loss: 0.6324 - learning_rate: 2.0000e-05
Epoch 76/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6638 - loss: 0.6248 - val_accuracy: 0.6378 - val_loss: 0.6359 - learning_rate: 2.0000e-05
Epoch 77/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6885 - loss: 0.6172 - val_accuracy: 0.6168 - val_loss: 0.6505 - learning_rate: 2.0000e-05
Epoch 78/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6411 - loss: 0.6394 - val_accuracy: 0.6299 - val_loss: 0.6360 - learning_rate: 2.0000e-05
Epoch 79/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6320 - loss: 0.6388 - val_accuracy: 0.6614 - val_loss: 0.6318 - learning_rate: 2.0000e-05
Epoch 80/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6563 - loss: 0.6246 - val_accuracy: 0.6378 - val_loss: 0.6448 - learning_rate: 2.0000e-05
Epoch 81/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6705 - loss: 0.6214 - val_accuracy: 0.6614 - val_loss: 0.6324 - learning_rate: 2.0000e-05
Epoch 82/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6338 - loss: 0.6347 - val_accuracy: 0.6404 - val_loss: 0.6351 - learning_rate: 2.0000e-05
Epoch 83/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6618 - loss: 0.6235 - val_accuracy: 0.6614 - val_loss: 0.6329 - learning_rate: 2.0000e-05
Epoch 84/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6684 - loss: 0.6247 - val_accuracy: 0.6247 - val_loss: 0.6371 - learning_rate: 2.0000e-05
Epoch 85/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6405 - loss: 0.6372 - val_accuracy: 0.6457 - val_loss: 0.6323 - learning_rate: 2.0000e-05
Epoch 86/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6776 - loss: 0.6227 - val_accuracy: 0.6667 - val_loss: 0.6303 - learning_rate: 2.0000e-05
Epoch 87/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6513 - loss: 0.6302 - val_accuracy: 0.6535 - val_loss: 0.6319 - learning_rate: 2.0000e-05
Epoch 88/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6502 - loss: 0.6339 - val_accuracy: 0.6404 - val_loss: 0.6370 - learning_rate: 2.0000e-05
Epoch 89/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6560 - loss: 0.6232 - val_accuracy: 0.6614 - val_loss: 0.6291 - learning_rate: 2.0000e-05
Epoch 90/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6895 - loss: 0.6016 - val_accuracy: 0.6430 - val_loss: 0.6343 - learning_rate: 2.0000e-05
Epoch 91/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6641 - loss: 0.6218 - val_accuracy: 0.6457 - val_loss: 0.6341 - learning_rate: 2.0000e-05
Epoch 92/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6430 - loss: 0.6223 - val_accuracy: 0.6772 - val_loss: 0.6279 - learning_rate: 2.0000e-05
Epoch 93/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6708 - loss: 0.6132 - val_accuracy: 0.6247 - val_loss: 0.6349 - learning_rate: 2.0000e-05
Epoch 94/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6901 - loss: 0.6114 - val_accuracy: 0.6693 - val_loss: 0.6273 - learning_rate: 2.0000e-05
Epoch 95/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6714 - loss: 0.6124 - val_accuracy: 0.6430 - val_loss: 0.6323 - learning_rate: 2.0000e-05
Epoch 96/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6669 - loss: 0.6210 - val_accuracy: 0.6483 - val_loss: 0.6293 - learning_rate: 2.0000e-05
Epoch 97/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6579 - loss: 0.6191 - val_accuracy: 0.6273 - val_loss: 0.6423 - learning_rate: 2.0000e-05
Epoch 98/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.6614 - loss: 0.6190 - val_accuracy: 0.6273 - val_loss: 0.6347 - learning_rate: 2.0000e-05
Epoch 99/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.6501 - loss: 0.6369 - val_accuracy: 0.6325 - val_loss: 0.6329 - learning_rate: 2.0000e-05
Epoch 100/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6748 - loss: 0.6075 - val_accuracy: 0.6614 - val_loss: 0.6258 - learning_rate: 2.0000e-05
Epoch 101/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6769 - loss: 0.6105 - val_accuracy: 0.6693 - val_loss: 0.6260 - learning_rate: 2.0000e-05
Epoch 102/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6504 - loss: 0.6362 - val_accuracy: 0.6273 - val_loss: 0.6288 - learning_rate: 2.0000e-05
Epoch 103/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6492 - loss: 0.6269 - val_accuracy: 0.6430 - val_loss: 0.6283 - learning_rate: 2.0000e-05
Epoch 104/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6696 - loss: 0.6211 - val_accuracy: 0.6719 - val_loss: 0.6252 - learning_rate: 2.0000e-05
Epoch 105/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6684 - loss: 0.6117 - val_accuracy: 0.6378 - val_loss: 0.6327 - learning_rate: 2.0000e-05
Epoch 106/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6608 - loss: 0.6231 - val_accuracy: 0.6404 - val_loss: 0.6284 - learning_rate: 2.0000e-05
Epoch 107/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6744 - loss: 0.6021 - val_accuracy: 0.6325 - val_loss: 0.6305 - learning_rate: 2.0000e-05
Epoch 108/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6428 - loss: 0.6158 - val_accuracy: 0.6614 - val_loss: 0.6226 - learning_rate: 2.0000e-05
Epoch 109/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6756 - loss: 0.6079 - val_accuracy: 0.6457 - val_loss: 0.6244 - learning_rate: 2.0000e-05
Epoch 110/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6713 - loss: 0.6026 - val_accuracy: 0.6352 - val_loss: 0.6315 - learning_rate: 2.0000e-05
Epoch 111/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6647 - loss: 0.6192 - val_accuracy: 0.6693 - val_loss: 0.6208 - learning_rate: 2.0000e-05
Epoch 112/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6730 - loss: 0.6003 - val_accuracy: 0.6457 - val_loss: 0.6228 - learning_rate: 2.0000e-05
Epoch 113/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6788 - loss: 0.6016 - val_accuracy: 0.6614 - val_loss: 0.6215 - learning_rate: 2.0000e-05
Epoch 114/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6765 - loss: 0.6102 - val_accuracy: 0.6457 - val_loss: 0.6272 - learning_rate: 2.0000e-05
Epoch 115/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6751 - loss: 0.6148 - val_accuracy: 0.6850 - val_loss: 0.6185 - learning_rate: 2.0000e-05
Epoch 116/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6774 - loss: 0.6143 - val_accuracy: 0.6509 - val_loss: 0.6237 - learning_rate: 2.0000e-05
Epoch 117/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6450 - loss: 0.6175 - val_accuracy: 0.6798 - val_loss: 0.6213 - learning_rate: 2.0000e-05
Epoch 118/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6811 - loss: 0.6098 - val_accuracy: 0.6614 - val_loss: 0.6206 - learning_rate: 2.0000e-05
Epoch 119/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6724 - loss: 0.6229 - val_accuracy: 0.6745 - val_loss: 0.6177 - learning_rate: 2.0000e-05
Epoch 120/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6897 - loss: 0.5978 - val_accuracy: 0.6640 - val_loss: 0.6185 - learning_rate: 2.0000e-05
Epoch 121/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6908 - loss: 0.5963 - val_accuracy: 0.6430 - val_loss: 0.6318 - learning_rate: 2.0000e-05
Epoch 122/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6688 - loss: 0.6384 - val_accuracy: 0.6562 - val_loss: 0.6180 - learning_rate: 2.0000e-05
Epoch 123/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6883 - loss: 0.6112 - val_accuracy: 0.6614 - val_loss: 0.6189 - learning_rate: 2.0000e-05
Epoch 124/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6718 - loss: 0.6042 - val_accuracy: 0.6404 - val_loss: 0.6351 - learning_rate: 2.0000e-05
Epoch 125/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6485 - loss: 0.6284 - val_accuracy: 0.6982 - val_loss: 0.6125 - learning_rate: 2.0000e-05
Epoch 126/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6860 - loss: 0.6064 - val_accuracy: 0.6903 - val_loss: 0.6132 - learning_rate: 2.0000e-05
Epoch 127/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6740 - loss: 0.6008 - val_accuracy: 0.6850 - val_loss: 0.6118 - learning_rate: 2.0000e-05
Epoch 128/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6794 - loss: 0.6067 - val_accuracy: 0.6955 - val_loss: 0.6111 - learning_rate: 2.0000e-05
Epoch 129/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6827 - loss: 0.6036 - val_accuracy: 0.6483 - val_loss: 0.6230 - learning_rate: 2.0000e-05
Epoch 130/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6968 - loss: 0.6048 - val_accuracy: 0.6562 - val_loss: 0.6277 - learning_rate: 2.0000e-05
Epoch 131/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6893 - loss: 0.6049 - val_accuracy: 0.6850 - val_loss: 0.6101 - learning_rate: 2.0000e-05
Epoch 132/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6837 - loss: 0.6057 - val_accuracy: 0.6772 - val_loss: 0.6115 - learning_rate: 2.0000e-05
Epoch 133/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6872 - loss: 0.6017 - val_accuracy: 0.6955 - val_loss: 0.6075 - learning_rate: 2.0000e-05
Epoch 134/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6725 - loss: 0.6068 - val_accuracy: 0.6509 - val_loss: 0.6231 - learning_rate: 2.0000e-05
Epoch 135/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6769 - loss: 0.6124 - val_accuracy: 0.6850 - val_loss: 0.6084 - learning_rate: 2.0000e-05
Epoch 136/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7081 - loss: 0.5938 - val_accuracy: 0.6483 - val_loss: 0.6153 - learning_rate: 2.0000e-05
Epoch 137/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6760 - loss: 0.6026 - val_accuracy: 0.6640 - val_loss: 0.6116 - learning_rate: 2.0000e-05
Epoch 138/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7107 - loss: 0.5838 - val_accuracy: 0.6640 - val_loss: 0.6247 - learning_rate: 2.0000e-05
Epoch 139/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6938 - loss: 0.6011 - val_accuracy: 0.6719 - val_loss: 0.6061 - learning_rate: 2.0000e-05
Epoch 140/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6853 - loss: 0.6054 - val_accuracy: 0.6588 - val_loss: 0.6192 - learning_rate: 2.0000e-05
Epoch 141/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6907 - loss: 0.5837 - val_accuracy: 0.6982 - val_loss: 0.6072 - learning_rate: 2.0000e-05
Epoch 142/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7009 - loss: 0.5920 - val_accuracy: 0.6955 - val_loss: 0.6042 - learning_rate: 2.0000e-05
Epoch 143/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6962 - loss: 0.5997 - val_accuracy: 0.6982 - val_loss: 0.6019 - learning_rate: 2.0000e-05
Epoch 144/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6956 - loss: 0.6034 - val_accuracy: 0.6982 - val_loss: 0.6021 - learning_rate: 2.0000e-05
Epoch 145/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6886 - loss: 0.6050 - val_accuracy: 0.6877 - val_loss: 0.6063 - learning_rate: 2.0000e-05
Epoch 146/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6733 - loss: 0.6132 - val_accuracy: 0.7060 - val_loss: 0.6034 - learning_rate: 2.0000e-05
Epoch 147/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6896 - loss: 0.6018 - val_accuracy: 0.7008 - val_loss: 0.6006 - learning_rate: 2.0000e-05
Epoch 148/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7036 - loss: 0.5937 - val_accuracy: 0.6824 - val_loss: 0.5996 - learning_rate: 2.0000e-05
Epoch 149/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6989 - loss: 0.6065 - val_accuracy: 0.6929 - val_loss: 0.6004 - learning_rate: 2.0000e-05
Epoch 150/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7073 - loss: 0.5878 - val_accuracy: 0.6982 - val_loss: 0.6068 - learning_rate: 2.0000e-05
Epoch 151/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6965 - loss: 0.5967 - val_accuracy: 0.6798 - val_loss: 0.6059 - learning_rate: 2.0000e-05
Epoch 152/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7125 - loss: 0.5977 - val_accuracy: 0.7139 - val_loss: 0.5982 - learning_rate: 2.0000e-05
Epoch 153/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6923 - loss: 0.6104 - val_accuracy: 0.6929 - val_loss: 0.5995 - learning_rate: 2.0000e-05
Epoch 154/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7031 - loss: 0.5899 - val_accuracy: 0.6850 - val_loss: 0.6000 - learning_rate: 2.0000e-05
Epoch 155/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6886 - loss: 0.6141 - val_accuracy: 0.6850 - val_loss: 0.6033 - learning_rate: 2.0000e-05
Epoch 156/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7067 - loss: 0.5975 - val_accuracy: 0.6640 - val_loss: 0.6116 - learning_rate: 2.0000e-05
Epoch 157/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7016 - loss: 0.5937 - val_accuracy: 0.6719 - val_loss: 0.6165 - learning_rate: 2.0000e-05
Epoch 158/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7134 - loss: 0.5733 - val_accuracy: 0.6719 - val_loss: 0.6140 - learning_rate: 2.0000e-05
Epoch 159/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7085 - loss: 0.5804 - val_accuracy: 0.6982 - val_loss: 0.5943 - learning_rate: 2.0000e-05
Epoch 160/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6894 - loss: 0.6018 - val_accuracy: 0.7034 - val_loss: 0.5937 - learning_rate: 2.0000e-05
Epoch 161/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7049 - loss: 0.5868 - val_accuracy: 0.6982 - val_loss: 0.5918 - learning_rate: 2.0000e-05
Epoch 162/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6682 - loss: 0.6145 - val_accuracy: 0.6693 - val_loss: 0.6120 - learning_rate: 2.0000e-05
Epoch 163/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6876 - loss: 0.5945 - val_accuracy: 0.6955 - val_loss: 0.5965 - learning_rate: 2.0000e-05
Epoch 164/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7173 - loss: 0.5820 - val_accuracy: 0.6667 - val_loss: 0.6173 - learning_rate: 2.0000e-05
Epoch 165/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6937 - loss: 0.5891 - val_accuracy: 0.6982 - val_loss: 0.5935 - learning_rate: 2.0000e-05
Epoch 166/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7041 - loss: 0.5898 - val_accuracy: 0.6667 - val_loss: 0.6119 - learning_rate: 2.0000e-05
Epoch 167/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6858 - loss: 0.5899 - val_accuracy: 0.6955 - val_loss: 0.5928 - learning_rate: 2.0000e-05
Epoch 168/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6914 - loss: 0.5802 - val_accuracy: 0.6798 - val_loss: 0.5966 - learning_rate: 2.0000e-05
Epoch 169/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7085 - loss: 0.5805 - val_accuracy: 0.6745 - val_loss: 0.6276 - learning_rate: 2.0000e-05
Epoch 170/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.6705 - loss: 0.5963 - val_accuracy: 0.6877 - val_loss: 0.5891 - learning_rate: 4.0000e-06
Epoch 171/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7276 - loss: 0.5734 - val_accuracy: 0.6903 - val_loss: 0.5872 - learning_rate: 4.0000e-06
Epoch 172/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7266 - loss: 0.5770 - val_accuracy: 0.6824 - val_loss: 0.5892 - learning_rate: 4.0000e-06
Epoch 173/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7034 - loss: 0.5976 - val_accuracy: 0.7060 - val_loss: 0.5869 - learning_rate: 4.0000e-06
Epoch 174/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7132 - loss: 0.5868 - val_accuracy: 0.7008 - val_loss: 0.5885 - learning_rate: 4.0000e-06
Epoch 175/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7296 - loss: 0.5797 - val_accuracy: 0.6850 - val_loss: 0.5885 - learning_rate: 4.0000e-06
Epoch 176/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7201 - loss: 0.5792 - val_accuracy: 0.6824 - val_loss: 0.5887 - learning_rate: 4.0000e-06
Epoch 177/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7358 - loss: 0.5757 - val_accuracy: 0.7060 - val_loss: 0.5867 - learning_rate: 4.0000e-06
Epoch 178/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7179 - loss: 0.5737 - val_accuracy: 0.6929 - val_loss: 0.5888 - learning_rate: 4.0000e-06
Epoch 179/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7025 - loss: 0.5950 - val_accuracy: 0.6903 - val_loss: 0.5864 - learning_rate: 4.0000e-06
Epoch 180/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7475 - loss: 0.5673 - val_accuracy: 0.6955 - val_loss: 0.5870 - learning_rate: 4.0000e-06
Epoch 181/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7064 - loss: 0.5793 - val_accuracy: 0.6955 - val_loss: 0.5862 - learning_rate: 4.0000e-06
Epoch 182/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7088 - loss: 0.5821 - val_accuracy: 0.7008 - val_loss: 0.5860 - learning_rate: 4.0000e-06
Epoch 183/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6941 - loss: 0.5860 - val_accuracy: 0.6824 - val_loss: 0.5949 - learning_rate: 4.0000e-06
Epoch 184/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7020 - loss: 0.5734 - val_accuracy: 0.6877 - val_loss: 0.5876 - learning_rate: 4.0000e-06
Epoch 185/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7203 - loss: 0.5667 - val_accuracy: 0.7087 - val_loss: 0.5853 - learning_rate: 4.0000e-06
Epoch 186/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7082 - loss: 0.5770 - val_accuracy: 0.7165 - val_loss: 0.5860 - learning_rate: 4.0000e-06
Epoch 187/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.6967 - loss: 0.6000 - val_accuracy: 0.7087 - val_loss: 0.5852 - learning_rate: 4.0000e-06
Epoch 188/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7189 - loss: 0.5990 - val_accuracy: 0.7008 - val_loss: 0.5861 - learning_rate: 4.0000e-06
Epoch 189/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 53ms/step - accuracy: 0.7237 - loss: 0.5696 - val_accuracy: 0.6850 - val_loss: 0.5887 - learning_rate: 4.0000e-06
Epoch 190/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7080 - loss: 0.5765 - val_accuracy: 0.6877 - val_loss: 0.5911 - learning_rate: 4.0000e-06
Epoch 191/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.7037 - loss: 0.5728 - val_accuracy: 0.6850 - val_loss: 0.5870 - learning_rate: 4.0000e-06
Epoch 192/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7017 - loss: 0.5788 - val_accuracy: 0.7087 - val_loss: 0.5850 - learning_rate: 4.0000e-06
Epoch 193/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7026 - loss: 0.5990 - val_accuracy: 0.7113 - val_loss: 0.5847 - learning_rate: 4.0000e-06
Epoch 194/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7375 - loss: 0.5568 - val_accuracy: 0.7113 - val_loss: 0.5850 - learning_rate: 4.0000e-06
Epoch 195/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7080 - loss: 0.5741 - val_accuracy: 0.7087 - val_loss: 0.5856 - learning_rate: 4.0000e-06
Epoch 196/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.7180 - loss: 0.5715 - val_accuracy: 0.6982 - val_loss: 0.5853 - learning_rate: 4.0000e-06
Epoch 197/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.6928 - loss: 0.5747 - val_accuracy: 0.7034 - val_loss: 0.5839 - learning_rate: 4.0000e-06
Epoch 198/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 55ms/step - accuracy: 0.7428 - loss: 0.5400 - val_accuracy: 0.7113 - val_loss: 0.5841 - learning_rate: 4.0000e-06
Epoch 199/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7117 - loss: 0.5904 - val_accuracy: 0.6955 - val_loss: 0.5848 - learning_rate: 4.0000e-06
Epoch 200/200
96/96 ━━━━━━━━━━━━━━━━━━━━ 11s 54ms/step - accuracy: 0.7190 - loss: 0.5604 - val_accuracy: 0.6982 - val_loss: 0.5837 - learning_rate: 4.0000e-06
InΒ [Β ]:
# Plot training & validation accuracy values
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')

# Plot training & validation loss values
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')

plt.tight_layout()
plt.show();
[Figure: training and validation accuracy (left) and loss (right) curves over 200 epochs]

The plots above show the training and validation curves closely tracking each other, which indicates that the model is not overfitting and is learning the underlying patterns in the data. This model was trained across several iterations of this notebook, and in this iteration it looks like it could benefit from further training: accuracy was still increasing and both losses were still decreasing at the end of the run.
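One way to back the "still decreasing" reading with numbers is to smooth the validation loss and compare the start and end of the smoothed curve. A minimal sketch with dummy values (the real series would come from `history.history['val_loss']`):

```python
import numpy as np

# Stand-in for history.history['val_loss']; replace with the real series
val_loss = np.array([0.62, 0.61, 0.60, 0.595, 0.59, 0.588, 0.586, 0.585])

# Moving average to damp epoch-to-epoch noise before comparing trend
window = 4
smoothed = np.convolve(val_loss, np.ones(window) / window, mode="valid")
still_improving = smoothed[-1] < smoothed[0]
```

If `still_improving` is true, continuing training (or lowering the learning rate further) is likely worthwhile.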

InΒ [Β ]:
# Collect true labels and thresholded predictions over the validation set
y_true = []
y_pred = []

for images, labels in val_dataset:
    predictions = classification_model.predict(images, verbose=0)  # verbose=0 suppresses per-batch progress logs
    y_pred.extend((predictions > 0.5).astype(int).flatten())
    y_true.extend(labels.numpy().flatten())

cm = confusion_matrix(y_true, y_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show();
[Figure: confusion matrix heatmap for the validation set]

The model produced roughly equal numbers of misclassifications for both classes, indicating that it is not biased toward one class.
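The confusion-matrix counts can also be turned into per-class metrics such as precision and recall. A minimal NumPy sketch with dummy labels (the real `y_true` / `y_pred` require the trained model and dataset); the four cells match `sklearn`'s `confusion_matrix(y_true, y_pred).ravel()` ordering:

```python
import numpy as np

# Dummy binary labels standing in for the real y_true / y_pred above
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 0, 1, 0])

# Confusion-matrix cells: tn, fp, fn, tp
tn = int(np.sum((y_true == 0) & (y_pred == 0)))
fp = int(np.sum((y_true == 0) & (y_pred == 1)))
fn = int(np.sum((y_true == 1) & (y_pred == 0)))
tp = int(np.sum((y_true == 1) & (y_pred == 1)))

precision = tp / (tp + fp)  # of predicted positives, how many were correct
recall = tp / (tp + fn)     # of actual positives, how many were found
```

Comparing per-class recall directly is a quick check for the class-bias claim above.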

VGG16 Model

The model below is the second classifier trained on this dataset, for comparison with the basic classifier above. I recreated the architecture of the well-known VGG16 model and trained it on the diabetic retinopathy labels for 160 epochs, with a learning-rate scheduler driven by the validation loss.
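The scheduler's effect is visible in the training log below, where the learning rate steps from 1e-5 down to 2e-6, 4e-7, and finally 1e-7. A small pure-Python sketch that approximates the `ReduceLROnPlateau` semantics (cooldown omitted): after `patience` epochs without a new best validation loss, the rate is multiplied by `factor`, floored at `min_lr`.

```python
def schedule_lr(val_losses, lr=1e-5, factor=0.2, patience=8, min_lr=1e-7):
    """Approximate ReduceLROnPlateau: return the lr used after each epoch."""
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0  # improvement resets the patience counter
        else:
            wait += 1
            if wait >= patience:  # `patience` stagnant epochs -> reduce
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

# One good epoch followed by eight stagnant ones triggers a single reduction
lrs = schedule_lr([0.60] + [0.61] * 8)
```

With `factor=0.2` this matches the 1e-5 → 2e-6 → 4e-7 → 1e-7 staircase seen in the log.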

InΒ [Β ]:
# Create a vgg16 model with sigmoid activation for binary classification
def create_vgg16(input_shape=(224, 224, 3), num_classes=1):
    model = models.Sequential([
        # Block 1
        layers.Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=input_shape),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        # Block 2
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        # Block 3
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        # Block 4
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        # Block 5
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        # Classification block
        layers.Flatten(),
        layers.Dense(4096, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(4096, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='sigmoid' if num_classes == 1 else 'softmax')
    ])

    return model

# Create the model
vgg_model = create_vgg16()

# Compile the model
optimizer = Adam(learning_rate=0.00001)
vgg_model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# Define callbacks
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=8, min_lr=0.0000001)
early_stopping = EarlyStopping(
    monitor='val_loss',
    patience=15,
    restore_best_weights=True,
    verbose=1
)

# Note: early_stopping is defined but deliberately left out of the callbacks
# list, so the model trains for the full 160 epochs with only the
# learning-rate scheduler active.
callbacks = [reduce_lr]

# Train the model
history = vgg_model.fit(
    train_dataset,
    validation_data=val_dataset,
    epochs=160,
    callbacks=callbacks
)

# Update saved weights
vgg_model.save(f'{BASE_DIR}vgg_model.keras')
Epoch 1/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 23s 109ms/step - accuracy: 0.4917 - loss: 0.6933 - val_accuracy: 0.5171 - val_loss: 0.6931 - learning_rate: 1.0000e-05
Epoch 2/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5020 - loss: 0.6930 - val_accuracy: 0.4829 - val_loss: 0.6932 - learning_rate: 1.0000e-05
Epoch 3/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.5172 - loss: 0.6934 - val_accuracy: 0.5512 - val_loss: 0.6920 - learning_rate: 1.0000e-05
Epoch 4/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5412 - loss: 0.6909 - val_accuracy: 0.6168 - val_loss: 0.6656 - learning_rate: 1.0000e-05
Epoch 5/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5629 - loss: 0.6809 - val_accuracy: 0.5591 - val_loss: 0.6785 - learning_rate: 1.0000e-05
Epoch 6/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5596 - loss: 0.6819 - val_accuracy: 0.6325 - val_loss: 0.6633 - learning_rate: 1.0000e-05
Epoch 7/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5941 - loss: 0.6725 - val_accuracy: 0.6115 - val_loss: 0.6630 - learning_rate: 1.0000e-05
Epoch 8/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6173 - loss: 0.6698 - val_accuracy: 0.6273 - val_loss: 0.6547 - learning_rate: 1.0000e-05
Epoch 9/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6119 - loss: 0.6648 - val_accuracy: 0.5984 - val_loss: 0.6794 - learning_rate: 1.0000e-05
Epoch 10/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6156 - loss: 0.6635 - val_accuracy: 0.6010 - val_loss: 0.6629 - learning_rate: 1.0000e-05
Epoch 11/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.5959 - loss: 0.6565 - val_accuracy: 0.6168 - val_loss: 0.6588 - learning_rate: 1.0000e-05
Epoch 12/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6243 - loss: 0.6501 - val_accuracy: 0.6247 - val_loss: 0.6485 - learning_rate: 1.0000e-05
Epoch 13/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6308 - loss: 0.6395 - val_accuracy: 0.6299 - val_loss: 0.6494 - learning_rate: 1.0000e-05
Epoch 14/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6549 - loss: 0.6201 - val_accuracy: 0.6299 - val_loss: 0.6379 - learning_rate: 1.0000e-05
Epoch 15/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6588 - loss: 0.6206 - val_accuracy: 0.6299 - val_loss: 0.6478 - learning_rate: 1.0000e-05
Epoch 16/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6484 - loss: 0.6276 - val_accuracy: 0.6404 - val_loss: 0.6435 - learning_rate: 1.0000e-05
Epoch 17/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6871 - loss: 0.6071 - val_accuracy: 0.6509 - val_loss: 0.6355 - learning_rate: 1.0000e-05
Epoch 18/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6647 - loss: 0.6020 - val_accuracy: 0.6562 - val_loss: 0.6248 - learning_rate: 1.0000e-05
Epoch 19/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.6960 - loss: 0.5923 - val_accuracy: 0.6719 - val_loss: 0.6093 - learning_rate: 1.0000e-05
Epoch 20/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.6773 - loss: 0.6144 - val_accuracy: 0.6667 - val_loss: 0.6192 - learning_rate: 1.0000e-05
Epoch 21/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.6865 - loss: 0.5914 - val_accuracy: 0.6588 - val_loss: 0.6097 - learning_rate: 1.0000e-05
Epoch 22/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7150 - loss: 0.5732 - val_accuracy: 0.6745 - val_loss: 0.5921 - learning_rate: 1.0000e-05
Epoch 23/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7274 - loss: 0.5456 - val_accuracy: 0.6903 - val_loss: 0.5800 - learning_rate: 1.0000e-05
Epoch 24/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7242 - loss: 0.5628 - val_accuracy: 0.7087 - val_loss: 0.5871 - learning_rate: 1.0000e-05
Epoch 25/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.7381 - loss: 0.5143 - val_accuracy: 0.6929 - val_loss: 0.5714 - learning_rate: 1.0000e-05
Epoch 26/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7229 - loss: 0.5483 - val_accuracy: 0.7034 - val_loss: 0.5626 - learning_rate: 1.0000e-05
Epoch 27/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7393 - loss: 0.5202 - val_accuracy: 0.7139 - val_loss: 0.5461 - learning_rate: 1.0000e-05
Epoch 28/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7676 - loss: 0.5052 - val_accuracy: 0.7060 - val_loss: 0.5668 - learning_rate: 1.0000e-05
Epoch 29/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7567 - loss: 0.5108 - val_accuracy: 0.7218 - val_loss: 0.5540 - learning_rate: 1.0000e-05
Epoch 30/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7714 - loss: 0.4764 - val_accuracy: 0.7297 - val_loss: 0.5566 - learning_rate: 1.0000e-05
Epoch 31/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7653 - loss: 0.4869 - val_accuracy: 0.7270 - val_loss: 0.5405 - learning_rate: 1.0000e-05
Epoch 32/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7734 - loss: 0.4594 - val_accuracy: 0.6903 - val_loss: 0.5970 - learning_rate: 1.0000e-05
Epoch 33/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7754 - loss: 0.4808 - val_accuracy: 0.7375 - val_loss: 0.5244 - learning_rate: 1.0000e-05
Epoch 34/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7989 - loss: 0.4490 - val_accuracy: 0.7349 - val_loss: 0.5477 - learning_rate: 1.0000e-05
Epoch 35/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7912 - loss: 0.4393 - val_accuracy: 0.7349 - val_loss: 0.5267 - learning_rate: 1.0000e-05
Epoch 36/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8016 - loss: 0.4372 - val_accuracy: 0.7349 - val_loss: 0.5604 - learning_rate: 1.0000e-05
Epoch 37/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8155 - loss: 0.3963 - val_accuracy: 0.7165 - val_loss: 0.5455 - learning_rate: 1.0000e-05
Epoch 38/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.7947 - loss: 0.4388 - val_accuracy: 0.7060 - val_loss: 0.5694 - learning_rate: 1.0000e-05
Epoch 39/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.7848 - loss: 0.4642 - val_accuracy: 0.7375 - val_loss: 0.5120 - learning_rate: 1.0000e-05
Epoch 40/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8185 - loss: 0.4171 - val_accuracy: 0.7349 - val_loss: 0.5288 - learning_rate: 1.0000e-05
Epoch 41/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8346 - loss: 0.3791 - val_accuracy: 0.7585 - val_loss: 0.5404 - learning_rate: 1.0000e-05
Epoch 42/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.8373 - loss: 0.3775 - val_accuracy: 0.7402 - val_loss: 0.5453 - learning_rate: 1.0000e-05
Epoch 43/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8388 - loss: 0.3639 - val_accuracy: 0.7402 - val_loss: 0.5399 - learning_rate: 1.0000e-05
Epoch 44/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 76ms/step - accuracy: 0.8380 - loss: 0.3824 - val_accuracy: 0.7664 - val_loss: 0.5350 - learning_rate: 1.0000e-05
Epoch 45/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8441 - loss: 0.3478 - val_accuracy: 0.7664 - val_loss: 0.5067 - learning_rate: 1.0000e-05
Epoch 46/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8332 - loss: 0.3619 - val_accuracy: 0.7559 - val_loss: 0.5457 - learning_rate: 1.0000e-05
Epoch 47/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8510 - loss: 0.3461 - val_accuracy: 0.7507 - val_loss: 0.5834 - learning_rate: 1.0000e-05
Epoch 48/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8388 - loss: 0.3681 - val_accuracy: 0.7559 - val_loss: 0.5572 - learning_rate: 1.0000e-05
Epoch 49/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.8592 - loss: 0.3316 - val_accuracy: 0.7559 - val_loss: 0.5394 - learning_rate: 1.0000e-05
Epoch 50/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8745 - loss: 0.3047 - val_accuracy: 0.7218 - val_loss: 0.5768 - learning_rate: 1.0000e-05
Epoch 51/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8454 - loss: 0.3474 - val_accuracy: 0.7480 - val_loss: 0.5616 - learning_rate: 1.0000e-05
Epoch 52/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8672 - loss: 0.3004 - val_accuracy: 0.7454 - val_loss: 0.6276 - learning_rate: 1.0000e-05
Epoch 53/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.8754 - loss: 0.3263 - val_accuracy: 0.7795 - val_loss: 0.5466 - learning_rate: 1.0000e-05
Epoch 54/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.8962 - loss: 0.2576 - val_accuracy: 0.7638 - val_loss: 0.5622 - learning_rate: 2.0000e-06
Epoch 55/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8900 - loss: 0.2481 - val_accuracy: 0.7717 - val_loss: 0.5555 - learning_rate: 2.0000e-06
Epoch 56/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.8939 - loss: 0.2591 - val_accuracy: 0.7507 - val_loss: 0.5688 - learning_rate: 2.0000e-06
Epoch 57/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9083 - loss: 0.2277 - val_accuracy: 0.7375 - val_loss: 0.6224 - learning_rate: 2.0000e-06
Epoch 58/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9008 - loss: 0.2347 - val_accuracy: 0.7743 - val_loss: 0.5822 - learning_rate: 2.0000e-06
Epoch 59/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8990 - loss: 0.2285 - val_accuracy: 0.7717 - val_loss: 0.5923 - learning_rate: 2.0000e-06
Epoch 60/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.8965 - loss: 0.2401 - val_accuracy: 0.7612 - val_loss: 0.6039 - learning_rate: 2.0000e-06
Epoch 61/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9098 - loss: 0.2113 - val_accuracy: 0.7690 - val_loss: 0.5891 - learning_rate: 2.0000e-06
Epoch 62/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9123 - loss: 0.2249 - val_accuracy: 0.7664 - val_loss: 0.5877 - learning_rate: 4.0000e-07
Epoch 63/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9051 - loss: 0.2190 - val_accuracy: 0.7690 - val_loss: 0.5892 - learning_rate: 4.0000e-07
Epoch 64/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9173 - loss: 0.2066 - val_accuracy: 0.7664 - val_loss: 0.5932 - learning_rate: 4.0000e-07
Epoch 65/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9261 - loss: 0.2042 - val_accuracy: 0.7638 - val_loss: 0.5933 - learning_rate: 4.0000e-07
Epoch 66/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9141 - loss: 0.2162 - val_accuracy: 0.7690 - val_loss: 0.5942 - learning_rate: 4.0000e-07
Epoch 67/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9154 - loss: 0.2214 - val_accuracy: 0.7612 - val_loss: 0.5970 - learning_rate: 4.0000e-07
Epoch 68/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9045 - loss: 0.2410 - val_accuracy: 0.7690 - val_loss: 0.5926 - learning_rate: 4.0000e-07
Epoch 69/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9141 - loss: 0.2060 - val_accuracy: 0.7612 - val_loss: 0.6005 - learning_rate: 4.0000e-07
Epoch 70/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9055 - loss: 0.2256 - val_accuracy: 0.7638 - val_loss: 0.5979 - learning_rate: 1.0000e-07
Epoch 71/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9229 - loss: 0.1932 - val_accuracy: 0.7664 - val_loss: 0.5980 - learning_rate: 1.0000e-07
Epoch 72/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9122 - loss: 0.2186 - val_accuracy: 0.7638 - val_loss: 0.5987 - learning_rate: 1.0000e-07
Epoch 73/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9272 - loss: 0.1723 - val_accuracy: 0.7664 - val_loss: 0.5993 - learning_rate: 1.0000e-07
Epoch 74/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9224 - loss: 0.1813 - val_accuracy: 0.7664 - val_loss: 0.5993 - learning_rate: 1.0000e-07
Epoch 75/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9279 - loss: 0.1885 - val_accuracy: 0.7664 - val_loss: 0.5988 - learning_rate: 1.0000e-07
Epoch 76/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9338 - loss: 0.1954 - val_accuracy: 0.7638 - val_loss: 0.5993 - learning_rate: 1.0000e-07
Epoch 77/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9276 - loss: 0.1958 - val_accuracy: 0.7638 - val_loss: 0.6013 - learning_rate: 1.0000e-07
Epoch 78/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9285 - loss: 0.2038 - val_accuracy: 0.7664 - val_loss: 0.6018 - learning_rate: 1.0000e-07
Epoch 79/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9244 - loss: 0.1937 - val_accuracy: 0.7638 - val_loss: 0.6018 - learning_rate: 1.0000e-07
Epoch 80/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9011 - loss: 0.2283 - val_accuracy: 0.7690 - val_loss: 0.5997 - learning_rate: 1.0000e-07
Epoch 81/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9163 - loss: 0.2063 - val_accuracy: 0.7664 - val_loss: 0.6019 - learning_rate: 1.0000e-07
Epoch 82/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9231 - loss: 0.1950 - val_accuracy: 0.7664 - val_loss: 0.6036 - learning_rate: 1.0000e-07
Epoch 83/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9107 - loss: 0.2252 - val_accuracy: 0.7664 - val_loss: 0.6023 - learning_rate: 1.0000e-07
Epoch 84/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9199 - loss: 0.1994 - val_accuracy: 0.7664 - val_loss: 0.6035 - learning_rate: 1.0000e-07
Epoch 85/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9288 - loss: 0.1988 - val_accuracy: 0.7664 - val_loss: 0.6018 - learning_rate: 1.0000e-07
Epoch 86/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9349 - loss: 0.1816 - val_accuracy: 0.7664 - val_loss: 0.6032 - learning_rate: 1.0000e-07
Epoch 87/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9274 - loss: 0.2072 - val_accuracy: 0.7585 - val_loss: 0.6051 - learning_rate: 1.0000e-07
Epoch 88/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9207 - loss: 0.1914 - val_accuracy: 0.7612 - val_loss: 0.6060 - learning_rate: 1.0000e-07
Epoch 89/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9264 - loss: 0.1974 - val_accuracy: 0.7638 - val_loss: 0.6051 - learning_rate: 1.0000e-07
Epoch 90/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9164 - loss: 0.2149 - val_accuracy: 0.7664 - val_loss: 0.6047 - learning_rate: 1.0000e-07
Epoch 91/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9202 - loss: 0.2087 - val_accuracy: 0.7664 - val_loss: 0.6056 - learning_rate: 1.0000e-07
Epoch 92/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9312 - loss: 0.1970 - val_accuracy: 0.7585 - val_loss: 0.6083 - learning_rate: 1.0000e-07
Epoch 93/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9226 - loss: 0.1968 - val_accuracy: 0.7664 - val_loss: 0.6070 - learning_rate: 1.0000e-07
Epoch 94/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9030 - loss: 0.2249 - val_accuracy: 0.7664 - val_loss: 0.6062 - learning_rate: 1.0000e-07
Epoch 95/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 72ms/step - accuracy: 0.9308 - loss: 0.1881 - val_accuracy: 0.7664 - val_loss: 0.6070 - learning_rate: 1.0000e-07
Epoch 96/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9242 - loss: 0.1982 - val_accuracy: 0.7585 - val_loss: 0.6081 - learning_rate: 1.0000e-07
Epoch 97/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9222 - loss: 0.2059 - val_accuracy: 0.7612 - val_loss: 0.6069 - learning_rate: 1.0000e-07
Epoch 98/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9194 - loss: 0.2124 - val_accuracy: 0.7638 - val_loss: 0.6062 - learning_rate: 1.0000e-07
Epoch 99/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9209 - loss: 0.2039 - val_accuracy: 0.7664 - val_loss: 0.6072 - learning_rate: 1.0000e-07
Epoch 100/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9252 - loss: 0.2067 - val_accuracy: 0.7612 - val_loss: 0.6089 - learning_rate: 1.0000e-07
Epoch 101/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9205 - loss: 0.1993 - val_accuracy: 0.7612 - val_loss: 0.6094 - learning_rate: 1.0000e-07
Epoch 102/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9278 - loss: 0.1879 - val_accuracy: 0.7638 - val_loss: 0.6090 - learning_rate: 1.0000e-07
Epoch 103/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9317 - loss: 0.1862 - val_accuracy: 0.7638 - val_loss: 0.6096 - learning_rate: 1.0000e-07
Epoch 104/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9216 - loss: 0.2040 - val_accuracy: 0.7533 - val_loss: 0.6150 - learning_rate: 1.0000e-07
Epoch 105/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9280 - loss: 0.1874 - val_accuracy: 0.7638 - val_loss: 0.6124 - learning_rate: 1.0000e-07
Epoch 106/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9235 - loss: 0.1906 - val_accuracy: 0.7585 - val_loss: 0.6136 - learning_rate: 1.0000e-07
Epoch 107/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9187 - loss: 0.1918 - val_accuracy: 0.7612 - val_loss: 0.6126 - learning_rate: 1.0000e-07
Epoch 108/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9178 - loss: 0.1957 - val_accuracy: 0.7612 - val_loss: 0.6129 - learning_rate: 1.0000e-07
Epoch 109/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9257 - loss: 0.1941 - val_accuracy: 0.7585 - val_loss: 0.6143 - learning_rate: 1.0000e-07
Epoch 110/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9199 - loss: 0.2000 - val_accuracy: 0.7559 - val_loss: 0.6135 - learning_rate: 1.0000e-07
Epoch 111/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9351 - loss: 0.1845 - val_accuracy: 0.7585 - val_loss: 0.6174 - learning_rate: 1.0000e-07
Epoch 112/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9252 - loss: 0.2049 - val_accuracy: 0.7559 - val_loss: 0.6153 - learning_rate: 1.0000e-07
Epoch 113/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9306 - loss: 0.1827 - val_accuracy: 0.7559 - val_loss: 0.6150 - learning_rate: 1.0000e-07
Epoch 114/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9241 - loss: 0.2058 - val_accuracy: 0.7559 - val_loss: 0.6134 - learning_rate: 1.0000e-07
Epoch 115/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9191 - loss: 0.2285 - val_accuracy: 0.7638 - val_loss: 0.6115 - learning_rate: 1.0000e-07
Epoch 116/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9279 - loss: 0.1984 - val_accuracy: 0.7585 - val_loss: 0.6124 - learning_rate: 1.0000e-07
Epoch 117/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9305 - loss: 0.2045 - val_accuracy: 0.7585 - val_loss: 0.6128 - learning_rate: 1.0000e-07
Epoch 118/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9281 - loss: 0.1832 - val_accuracy: 0.7585 - val_loss: 0.6143 - learning_rate: 1.0000e-07
Epoch 119/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9282 - loss: 0.1903 - val_accuracy: 0.7559 - val_loss: 0.6139 - learning_rate: 1.0000e-07
Epoch 120/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9206 - loss: 0.1980 - val_accuracy: 0.7559 - val_loss: 0.6135 - learning_rate: 1.0000e-07
Epoch 121/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9360 - loss: 0.1834 - val_accuracy: 0.7638 - val_loss: 0.6122 - learning_rate: 1.0000e-07
Epoch 122/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9189 - loss: 0.2146 - val_accuracy: 0.7585 - val_loss: 0.6137 - learning_rate: 1.0000e-07
Epoch 123/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9175 - loss: 0.2036 - val_accuracy: 0.7559 - val_loss: 0.6143 - learning_rate: 1.0000e-07
Epoch 124/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9189 - loss: 0.1890 - val_accuracy: 0.7559 - val_loss: 0.6158 - learning_rate: 1.0000e-07
Epoch 125/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9301 - loss: 0.1872 - val_accuracy: 0.7612 - val_loss: 0.6134 - learning_rate: 1.0000e-07
Epoch 126/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9212 - loss: 0.2066 - val_accuracy: 0.7638 - val_loss: 0.6137 - learning_rate: 1.0000e-07
Epoch 127/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9321 - loss: 0.1952 - val_accuracy: 0.7612 - val_loss: 0.6152 - learning_rate: 1.0000e-07
Epoch 128/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9253 - loss: 0.1900 - val_accuracy: 0.7585 - val_loss: 0.6145 - learning_rate: 1.0000e-07
Epoch 129/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9204 - loss: 0.2077 - val_accuracy: 0.7533 - val_loss: 0.6165 - learning_rate: 1.0000e-07
Epoch 130/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9321 - loss: 0.1747 - val_accuracy: 0.7533 - val_loss: 0.6170 - learning_rate: 1.0000e-07
Epoch 131/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9296 - loss: 0.1885 - val_accuracy: 0.7585 - val_loss: 0.6176 - learning_rate: 1.0000e-07
Epoch 132/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9225 - loss: 0.2031 - val_accuracy: 0.7559 - val_loss: 0.6151 - learning_rate: 1.0000e-07
Epoch 133/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9248 - loss: 0.1893 - val_accuracy: 0.7559 - val_loss: 0.6173 - learning_rate: 1.0000e-07
Epoch 134/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 72ms/step - accuracy: 0.9133 - loss: 0.2078 - val_accuracy: 0.7533 - val_loss: 0.6177 - learning_rate: 1.0000e-07
Epoch 135/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9203 - loss: 0.1932 - val_accuracy: 0.7559 - val_loss: 0.6166 - learning_rate: 1.0000e-07
Epoch 136/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9216 - loss: 0.2122 - val_accuracy: 0.7585 - val_loss: 0.6188 - learning_rate: 1.0000e-07
Epoch 137/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9296 - loss: 0.1789 - val_accuracy: 0.7585 - val_loss: 0.6199 - learning_rate: 1.0000e-07
Epoch 138/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9103 - loss: 0.2216 - val_accuracy: 0.7612 - val_loss: 0.6183 - learning_rate: 1.0000e-07
Epoch 139/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9338 - loss: 0.1828 - val_accuracy: 0.7638 - val_loss: 0.6173 - learning_rate: 1.0000e-07
Epoch 140/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9202 - loss: 0.2006 - val_accuracy: 0.7612 - val_loss: 0.6157 - learning_rate: 1.0000e-07
Epoch 141/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9237 - loss: 0.2060 - val_accuracy: 0.7612 - val_loss: 0.6180 - learning_rate: 1.0000e-07
Epoch 142/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9431 - loss: 0.1800 - val_accuracy: 0.7612 - val_loss: 0.6207 - learning_rate: 1.0000e-07
Epoch 143/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9228 - loss: 0.1954 - val_accuracy: 0.7559 - val_loss: 0.6187 - learning_rate: 1.0000e-07
Epoch 144/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9297 - loss: 0.1842 - val_accuracy: 0.7559 - val_loss: 0.6191 - learning_rate: 1.0000e-07
Epoch 145/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9204 - loss: 0.2015 - val_accuracy: 0.7559 - val_loss: 0.6187 - learning_rate: 1.0000e-07
Epoch 146/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9064 - loss: 0.2141 - val_accuracy: 0.7559 - val_loss: 0.6191 - learning_rate: 1.0000e-07
Epoch 147/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9140 - loss: 0.2008 - val_accuracy: 0.7559 - val_loss: 0.6181 - learning_rate: 1.0000e-07
Epoch 148/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9355 - loss: 0.1862 - val_accuracy: 0.7612 - val_loss: 0.6145 - learning_rate: 1.0000e-07
Epoch 149/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9330 - loss: 0.1716 - val_accuracy: 0.7585 - val_loss: 0.6179 - learning_rate: 1.0000e-07
Epoch 150/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9406 - loss: 0.1699 - val_accuracy: 0.7559 - val_loss: 0.6188 - learning_rate: 1.0000e-07
Epoch 151/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9152 - loss: 0.2072 - val_accuracy: 0.7559 - val_loss: 0.6210 - learning_rate: 1.0000e-07
Epoch 152/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9331 - loss: 0.1892 - val_accuracy: 0.7559 - val_loss: 0.6221 - learning_rate: 1.0000e-07
Epoch 153/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9349 - loss: 0.1880 - val_accuracy: 0.7612 - val_loss: 0.6200 - learning_rate: 1.0000e-07
Epoch 154/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9255 - loss: 0.1956 - val_accuracy: 0.7533 - val_loss: 0.6212 - learning_rate: 1.0000e-07
Epoch 155/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9363 - loss: 0.1817 - val_accuracy: 0.7533 - val_loss: 0.6201 - learning_rate: 1.0000e-07
Epoch 156/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9255 - loss: 0.2122 - val_accuracy: 0.7480 - val_loss: 0.6215 - learning_rate: 1.0000e-07
Epoch 157/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 73ms/step - accuracy: 0.9234 - loss: 0.1871 - val_accuracy: 0.7533 - val_loss: 0.6206 - learning_rate: 1.0000e-07
Epoch 158/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 76ms/step - accuracy: 0.9216 - loss: 0.2055 - val_accuracy: 0.7559 - val_loss: 0.6196 - learning_rate: 1.0000e-07
Epoch 159/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 74ms/step - accuracy: 0.9382 - loss: 0.1704 - val_accuracy: 0.7585 - val_loss: 0.6207 - learning_rate: 1.0000e-07
Epoch 160/160
96/96 ━━━━━━━━━━━━━━━━━━━━ 13s 75ms/step - accuracy: 0.9294 - loss: 0.1930 - val_accuracy: 0.7480 - val_loss: 0.6186 - learning_rate: 1.0000e-07
In [ ]:
# Plot training & validation accuracy values
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')

# Plot training & validation loss values
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')

plt.tight_layout()
plt.show();
[Figure: training and validation accuracy (left) and loss (right) over 160 epochs]

The plots above show some overfitting to the training data. In previous iterations of this notebook, both this model and the other model had trouble getting past the 0.78 level for validation accuracy, even when training accuracy was much higher. This could be due to some images in the validation set having unique features that the model was not able to generalize to. Overall, both classifiers train well, and I ran training multiple times to learn how the runs behave with respect to learning rate and number of epochs.
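The train/validation gap described above can be quantified directly from the Keras `history.history` dictionary. A minimal sketch (the `overfitting_gap` helper and the toy `hist` values are illustrative, not part of the notebook's training run):

```python
def overfitting_gap(history_dict, window=10):
    """Mean train-minus-validation accuracy gap over the last `window` epochs."""
    acc = history_dict['accuracy'][-window:]
    val = history_dict['val_accuracy'][-window:]
    return sum(a - v for a, v in zip(acc, val)) / len(acc)

# Toy values resembling the late epochs logged above (~0.93 train, ~0.76 val):
hist = {'accuracy': [0.93, 0.92, 0.93], 'val_accuracy': [0.76, 0.75, 0.76]}
print(round(overfitting_gap(hist, window=3), 3))  # 0.17
```

A persistent gap of this size over the final epochs is a reasonable numeric signal that further training mainly memorizes the training set.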

In [ ]:
y_true = []
y_pred = []

for images, labels in val_dataset:
    predictions = vgg_model.predict(images, verbose=0)  # verbose=0 suppresses per-batch progress bars
    y_pred.extend((predictions > 0.5).astype(int).flatten())
    y_true.extend(labels.numpy().flatten())

cm = confusion_matrix(y_true, y_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show();
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 52ms/step
[Figure: confusion matrix for the validation set]

This model's confusion matrix is similar to the previous one: misclassifications are balanced across both classes. This has been a consistent trend across multiple iterations of this notebook.
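Whether misclassifications are balanced can also be read off as per-class precision and recall computed from the same `y_true`/`y_pred` lists. A small self-contained sketch (toy labels `yt`/`yp` are illustrative, not the notebook's actual predictions):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example with one false positive and one false negative:
yt = [1, 0, 1, 1, 0, 0]
yp = [1, 0, 0, 1, 0, 1]
p, r = precision_recall(yt, yp)  # p == r == 2/3
```

When precision and recall are close for both classes, the errors are balanced in the sense described above; `sklearn.metrics.classification_report` gives the same numbers with less code.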

StylexGenerator¶

Next in this notebook, I implement a simplified version of the StyleGAN model to generate synthetic images of eyes. The generator is trained from scratch on the BRSET dataset for 200 epochs to synthesize images that resemble the real eye images. This is no easy task, as the model is trained from scratch and the images are quite complex.

In [ ]:
# StyleGAN simple implementation for generating fundus images
class StylexGenerator(tf.keras.Model):
    def __init__(self, latent_dim, img_shape, **kwargs):
        super(StylexGenerator, self).__init__(**kwargs)
        self.latent_dim = latent_dim
        self.img_shape = img_shape
        self.model = models.Sequential([
            layers.Dense(256, input_dim=latent_dim),
            layers.LeakyReLU(alpha=0.2),
            layers.BatchNormalization(momentum=0.8),
            layers.Dense(512),
            layers.LeakyReLU(alpha=0.2),
            layers.BatchNormalization(momentum=0.8),
            layers.Dense(1024),
            layers.LeakyReLU(alpha=0.2),
            layers.BatchNormalization(momentum=0.8),
            layers.Dense(int(tf.math.reduce_prod(img_shape)), activation='tanh'),
            layers.Reshape(img_shape)
        ])

    def call(self, z):
        return self.model(z)

    def get_config(self):
        config = super(StylexGenerator, self).get_config()
        config.update({
            "latent_dim": self.latent_dim,
            "img_shape": self.img_shape
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Discriminator model
class StylexDiscriminator(tf.keras.Model):
    def __init__(self, img_shape, **kwargs):
        super(StylexDiscriminator, self).__init__(**kwargs)
        self.img_shape = img_shape
        self.model = models.Sequential([
            layers.Flatten(input_shape=img_shape),
            layers.Dense(512),
            layers.LeakyReLU(alpha=0.2),
            layers.Dense(256),
            layers.LeakyReLU(alpha=0.2),
            layers.Dense(1, activation='sigmoid')
        ])

    def call(self, img):
        return self.model(img)

    def get_config(self):
        config = super(StylexDiscriminator, self).get_config()
        config.update({
            "img_shape": self.img_shape
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Define loss functions
def generator_loss(fake_output):
    return tf.keras.losses.binary_crossentropy(tf.ones_like(fake_output), fake_output)

def discriminator_loss(real_output, fake_output):
    real_loss = tf.keras.losses.binary_crossentropy(tf.ones_like(real_output), real_output)
    fake_loss = tf.keras.losses.binary_crossentropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def classifier_loss(labels, cls_outputs):
    return tf.keras.losses.binary_crossentropy(labels, cls_outputs)

# Define the train_step function
@tf.function
def train_step(real_images, labels, generator, discriminator, classifier, gen_optimizer, disc_optimizer, threshold=0.5):
    batch_size = tf.shape(real_images)[0]
    noise = tf.random.normal([batch_size, latent_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(real_images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = tf.reduce_mean(generator_loss(fake_output))
        disc_loss = tf.reduce_mean(discriminator_loss(real_output, fake_output))

        # Classifier guidance loss. Binarizing the classifier output with
        # tf.cast(cls_outputs > threshold, ...) is non-differentiable and would
        # block gradients from reaching the generator (the classifier loss then
        # never improves), so the raw probabilities are used directly.
        cls_outputs = classifier(generated_images, training=False)
        c_loss = tf.reduce_mean(classifier_loss(labels, cls_outputs))

        gen_total_loss = gen_loss + c_loss

    gradients_of_generator = gen_tape.gradient(gen_total_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    gen_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    disc_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

    return gen_loss, disc_loss, c_loss

# Setup model training
img_shape = (224, 224, 3)
latent_dim = 100

generator = StylexGenerator(latent_dim, img_shape)
discriminator = StylexDiscriminator(img_shape)

gen_optimizer = tf.keras.optimizers.Adam(1e-4)
disc_optimizer = tf.keras.optimizers.Adam(1e-4)


user_model = tf.keras.models.load_model(f'{BASE_DIR}classification_model.keras')
dataset = train_dataset

def generate_and_save_images(generator, epoch, num_examples=16):
    # Generate noise for the input
    noise = tf.random.normal([num_examples, generator.latent_dim])

    # Generate images
    generated_images = generator(noise, training=False)

    # Rescale images from the tanh output range [-1, 1] to [0, 1] for display
    generated_images = (generated_images + 1) / 2.0

    # Plot the generated images
    fig = plt.figure(figsize=(4, 4))

    for i in range(num_examples):
        plt.subplot(4, 4, i+1)
        plt.imshow(generated_images[i])
        plt.axis('off')

    plt.tight_layout()
    plt.savefig(f'{BASE_DIR}generated_images_epoch_{epoch}.png')
    plt.close(fig)

    print(f"Images saved for epoch {epoch}")

epochs = 200
checkpoint_interval = 50

for epoch in range(epochs):
    gen_losses = []
    disc_losses = []
    cls_losses = []


    for image_batch, label_batch in tqdm(dataset, desc=f'Epoch {epoch + 1}/{epochs}', unit='batch'):
        gen_loss, disc_loss, c_loss = train_step(image_batch, label_batch, generator, discriminator, user_model, gen_optimizer, disc_optimizer)
        gen_losses.append(gen_loss.numpy())
        disc_losses.append(disc_loss.numpy())
        cls_losses.append(c_loss.numpy())

    print(f'Epoch {epoch + 1}, Gen Loss: {np.mean(gen_losses):.4f}, Disc Loss: {np.mean(disc_losses):.4f}, Classifier Loss: {np.mean(cls_losses):.4f}')

    # Save checkpoint models
    if (epoch + 1) % checkpoint_interval == 0:
        generator.save(f'{BASE_DIR}stylex_generator_epoch_{epoch+1}.keras')
        discriminator.save(f'{BASE_DIR}stylex_discriminator_epoch_{epoch+1}.keras')

    # Generate and save sample images
    if (epoch + 1) % 10 == 0:
        generate_and_save_images(generator, epoch + 1)

# Save the final models
generator.save(f'{BASE_DIR}stylex_generator.keras')
discriminator.save(f'{BASE_DIR}stylex_discriminator.keras')
Epoch 1/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:15<00:00,  6.24batch/s]
Epoch 1, Gen Loss: 6.9246, Disc Loss: 2.3996, Classifier Loss: 7.8299
Epoch 2/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.55batch/s]
Epoch 2, Gen Loss: 4.7816, Disc Loss: 0.1034, Classifier Loss: 7.8259
Epoch 3/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.57batch/s]
Epoch 3, Gen Loss: 4.0024, Disc Loss: 0.2772, Classifier Loss: 7.8259
Epoch 4/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.69batch/s]
Epoch 4, Gen Loss: 3.0701, Disc Loss: 0.2535, Classifier Loss: 7.8259
Epoch 5/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.56batch/s]
Epoch 5, Gen Loss: 3.3359, Disc Loss: 0.4315, Classifier Loss: 7.9816
Epoch 6/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.55batch/s]
Epoch 6, Gen Loss: 3.3431, Disc Loss: 0.6335, Classifier Loss: 7.9816
Epoch 7/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.50batch/s]
Epoch 7, Gen Loss: 5.0520, Disc Loss: 1.9853, Classifier Loss: 7.8259
Epoch 8/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.71batch/s]
Epoch 8, Gen Loss: 8.5900, Disc Loss: 3.1198, Classifier Loss: 7.8259
Epoch 9/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 9, Gen Loss: 8.1930, Disc Loss: 3.6880, Classifier Loss: 7.8259
Epoch 10/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.90batch/s]
Epoch 10, Gen Loss: 8.7014, Disc Loss: 4.8925, Classifier Loss: 7.9816
Images saved for epoch 10
Epoch 11/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.01batch/s]
Epoch 11, Gen Loss: 9.2877, Disc Loss: 6.2901, Classifier Loss: 7.8259
Epoch 12/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.88batch/s]
Epoch 12, Gen Loss: 11.0121, Disc Loss: 6.0468, Classifier Loss: 7.9816
Epoch 13/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.88batch/s]
Epoch 13, Gen Loss: 8.5837, Disc Loss: 4.9585, Classifier Loss: 7.9816
Epoch 14/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.87batch/s]
Epoch 14, Gen Loss: 9.8736, Disc Loss: 7.4905, Classifier Loss: 7.8259
Epoch 15/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 15, Gen Loss: 11.2318, Disc Loss: 5.7470, Classifier Loss: 7.9816
Epoch 16/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 16, Gen Loss: 10.7809, Disc Loss: 7.5164, Classifier Loss: 7.8259
Epoch 17/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 17, Gen Loss: 7.6938, Disc Loss: 8.9473, Classifier Loss: 7.8575
Epoch 18/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 18, Gen Loss: 10.4359, Disc Loss: 6.3342, Classifier Loss: 7.8469
Epoch 19/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 19, Gen Loss: 9.7200, Disc Loss: 9.0394, Classifier Loss: 7.8165
Epoch 20/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.87batch/s]
Epoch 20, Gen Loss: 9.3371, Disc Loss: 9.1449, Classifier Loss: 7.9712
Images saved for epoch 20
Epoch 21/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.87batch/s]
Epoch 21, Gen Loss: 9.4849, Disc Loss: 9.1427, Classifier Loss: 7.7973
Epoch 22/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.00batch/s]
Epoch 22, Gen Loss: 10.5422, Disc Loss: 11.5907, Classifier Loss: 8.0140
Epoch 23/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 23, Gen Loss: 6.5015, Disc Loss: 7.4523, Classifier Loss: 7.8789
Epoch 24/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.73batch/s]
Epoch 24, Gen Loss: 6.7616, Disc Loss: 7.7622, Classifier Loss: 7.9725
Epoch 25/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 25, Gen Loss: 5.9639, Disc Loss: 6.6228, Classifier Loss: 7.6725
Epoch 26/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 26, Gen Loss: 6.1109, Disc Loss: 7.4821, Classifier Loss: 7.6686
Epoch 27/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.04batch/s]
Epoch 27, Gen Loss: 7.3011, Disc Loss: 7.3340, Classifier Loss: 8.0298
Epoch 28/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 28, Gen Loss: 3.7417, Disc Loss: 16.9546, Classifier Loss: 8.1875
Epoch 29/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 29, Gen Loss: 5.4471, Disc Loss: 4.6786, Classifier Loss: 7.8317
Epoch 30/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.70batch/s]
Epoch 30, Gen Loss: 5.2336, Disc Loss: 4.2490, Classifier Loss: 7.7563
Images saved for epoch 30
Epoch 31/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.69batch/s]
Epoch 31, Gen Loss: 5.1386, Disc Loss: 2.2706, Classifier Loss: 8.0039
Epoch 32/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 32, Gen Loss: 4.6328, Disc Loss: 2.2071, Classifier Loss: 7.7929
Epoch 33/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.98batch/s]
Epoch 33, Gen Loss: 5.8440, Disc Loss: 1.6860, Classifier Loss: 7.8574
Epoch 34/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.01batch/s]
Epoch 34, Gen Loss: 3.8182, Disc Loss: 1.9643, Classifier Loss: 8.1557
Epoch 35/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.77batch/s]
Epoch 35, Gen Loss: 3.2935, Disc Loss: 1.6653, Classifier Loss: 8.0954
Epoch 36/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.74batch/s]
Epoch 36, Gen Loss: 3.4664, Disc Loss: 0.9910, Classifier Loss: 8.2769
Epoch 37/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.98batch/s]
Epoch 37, Gen Loss: 3.9723, Disc Loss: 2.7562, Classifier Loss: 8.0122
Epoch 38/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 38, Gen Loss: 4.1233, Disc Loss: 2.2752, Classifier Loss: 7.9456
Epoch 39/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 39, Gen Loss: 2.9942, Disc Loss: 1.2640, Classifier Loss: 7.5808
Epoch 40/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 40, Gen Loss: 3.2465, Disc Loss: 0.9138, Classifier Loss: 8.1683
Images saved for epoch 40
Epoch 41/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 41, Gen Loss: 3.6942, Disc Loss: 0.7141, Classifier Loss: 7.8712
Epoch 42/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 42, Gen Loss: 2.8876, Disc Loss: 1.0631, Classifier Loss: 8.1635
Epoch 43/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 43, Gen Loss: 3.7888, Disc Loss: 0.5674, Classifier Loss: 8.1369
Epoch 44/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 44, Gen Loss: 3.3489, Disc Loss: 0.4893, Classifier Loss: 8.3754
Epoch 45/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 45, Gen Loss: 2.9260, Disc Loss: 0.7225, Classifier Loss: 8.0370
Epoch 46/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 46, Gen Loss: 2.8254, Disc Loss: 0.6642, Classifier Loss: 8.2565
Epoch 47/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.00batch/s]
Epoch 47, Gen Loss: 3.1805, Disc Loss: 0.6564, Classifier Loss: 8.2563
Epoch 48/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 48, Gen Loss: 3.1038, Disc Loss: 1.3225, Classifier Loss: 8.1385
Epoch 49/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.00batch/s]
Epoch 49, Gen Loss: 3.6305, Disc Loss: 1.0738, Classifier Loss: 8.1164
Epoch 50/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 50, Gen Loss: 3.2043, Disc Loss: 2.1844, Classifier Loss: 8.4523
Images saved for epoch 50
Epoch 51/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.21batch/s]
Epoch 51, Gen Loss: 4.1415, Disc Loss: 2.2900, Classifier Loss: 7.7473
Epoch 52/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.54batch/s]
Epoch 52, Gen Loss: 3.2057, Disc Loss: 2.0719, Classifier Loss: 8.0789
Epoch 53/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 53, Gen Loss: 5.5364, Disc Loss: 3.6639, Classifier Loss: 7.8711
Epoch 54/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.12batch/s]
Epoch 54, Gen Loss: 5.1029, Disc Loss: 3.6459, Classifier Loss: 8.1429
Epoch 55/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 55, Gen Loss: 6.6825, Disc Loss: 5.6273, Classifier Loss: 7.6312
Epoch 56/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.86batch/s]
Epoch 56, Gen Loss: 6.0806, Disc Loss: 3.6673, Classifier Loss: 7.8478
Epoch 57/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 57, Gen Loss: 6.7589, Disc Loss: 4.0263, Classifier Loss: 7.9634
Epoch 58/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.86batch/s]
Epoch 58, Gen Loss: 5.7881, Disc Loss: 2.8749, Classifier Loss: 8.0779
Epoch 59/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 59, Gen Loss: 5.5434, Disc Loss: 2.5095, Classifier Loss: 7.8430
Epoch 60/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.82batch/s]
Epoch 60, Gen Loss: 7.4822, Disc Loss: 3.9837, Classifier Loss: 7.7350
Images saved for epoch 60
Epoch 61/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 61, Gen Loss: 5.2739, Disc Loss: 2.5431, Classifier Loss: 8.2887
Epoch 62/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.73batch/s]
Epoch 62, Gen Loss: 5.0742, Disc Loss: 1.7378, Classifier Loss: 7.8335
Epoch 63/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 63, Gen Loss: 5.8291, Disc Loss: 2.4642, Classifier Loss: 7.9104
Epoch 64/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 64, Gen Loss: 5.4856, Disc Loss: 2.4761, Classifier Loss: 7.9256
Epoch 65/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 65, Gen Loss: 4.7151, Disc Loss: 1.6243, Classifier Loss: 8.1827
Epoch 66/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 66, Gen Loss: 4.4244, Disc Loss: 2.0252, Classifier Loss: 7.9429
Epoch 67/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.76batch/s]
Epoch 67, Gen Loss: 5.8156, Disc Loss: 2.1161, Classifier Loss: 8.0038
Epoch 68/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 68, Gen Loss: 6.8430, Disc Loss: 3.4056, Classifier Loss: 8.0010
Epoch 69/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.74batch/s]
Epoch 69, Gen Loss: 6.3636, Disc Loss: 2.9807, Classifier Loss: 7.9616
Epoch 70/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.00batch/s]
Epoch 70, Gen Loss: 8.2430, Disc Loss: 2.7704, Classifier Loss: 8.1221
Images saved for epoch 70
Epoch 71/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.89batch/s]
Epoch 71, Gen Loss: 6.3690, Disc Loss: 3.5894, Classifier Loss: 8.1986
Epoch 72/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 72, Gen Loss: 7.9865, Disc Loss: 3.8472, Classifier Loss: 7.9025
Epoch 73/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 73, Gen Loss: 9.0546, Disc Loss: 5.9372, Classifier Loss: 8.3258
Epoch 74/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 74, Gen Loss: 5.9788, Disc Loss: 3.4109, Classifier Loss: 7.9045
Epoch 75/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 75, Gen Loss: 8.7749, Disc Loss: 3.6683, Classifier Loss: 8.0507
Epoch 76/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 76, Gen Loss: 4.6639, Disc Loss: 2.1039, Classifier Loss: 7.9365
Epoch 77/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.78batch/s]
Epoch 77, Gen Loss: 5.6633, Disc Loss: 2.6329, Classifier Loss: 8.1607
Epoch 78/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.87batch/s]
Epoch 78, Gen Loss: 5.6366, Disc Loss: 2.0840, Classifier Loss: 8.3361
Epoch 79/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.77batch/s]
Epoch 79, Gen Loss: 6.6380, Disc Loss: 3.2065, Classifier Loss: 8.2011
Epoch 80/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 80, Gen Loss: 5.0773, Disc Loss: 1.6009, Classifier Loss: 8.1840
Images saved for epoch 80
Epoch 81/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.03batch/s]
Epoch 81, Gen Loss: 4.3146, Disc Loss: 1.9309, Classifier Loss: 8.3354
Epoch 82/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 82, Gen Loss: 4.2748, Disc Loss: 1.5678, Classifier Loss: 7.8545
Epoch 83/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.86batch/s]
Epoch 83, Gen Loss: 4.5003, Disc Loss: 2.3841, Classifier Loss: 8.3103
Epoch 84/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 84, Gen Loss: 3.6919, Disc Loss: 2.1068, Classifier Loss: 8.2617
Epoch 85/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.90batch/s]
Epoch 85, Gen Loss: 4.1906, Disc Loss: 1.9656, Classifier Loss: 7.7818
Epoch 86/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 86, Gen Loss: 4.5941, Disc Loss: 2.3875, Classifier Loss: 8.2696
Epoch 87/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 87, Gen Loss: 4.3099, Disc Loss: 2.5171, Classifier Loss: 7.8679
Epoch 88/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.88batch/s]
Epoch 88, Gen Loss: 4.7600, Disc Loss: 3.8957, Classifier Loss: 7.8737
Epoch 89/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 89, Gen Loss: 6.4247, Disc Loss: 3.3876, Classifier Loss: 7.9136
Epoch 90/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 90, Gen Loss: 7.9897, Disc Loss: 4.8438, Classifier Loss: 7.6359
Images saved for epoch 90
Epoch 91/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.07batch/s]
Epoch 91, Gen Loss: 11.1045, Disc Loss: 7.1293, Classifier Loss: 7.9630
Epoch 92/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.03batch/s]
Epoch 92, Gen Loss: 10.4139, Disc Loss: 6.9511, Classifier Loss: 8.1591
Epoch 93/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 93, Gen Loss: 8.2281, Disc Loss: 7.6930, Classifier Loss: 8.0964
Epoch 94/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 94, Gen Loss: 7.3993, Disc Loss: 4.0127, Classifier Loss: 8.5551
Epoch 95/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.78batch/s]
Epoch 95, Gen Loss: 5.1339, Disc Loss: 4.3686, Classifier Loss: 8.0713
Epoch 96/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 96, Gen Loss: 7.3590, Disc Loss: 5.0521, Classifier Loss: 8.3433
Epoch 97/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.05batch/s]
Epoch 97, Gen Loss: 5.5994, Disc Loss: 3.0055, Classifier Loss: 7.9502
Epoch 98/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.00batch/s]
Epoch 98, Gen Loss: 4.1527, Disc Loss: 2.7182, Classifier Loss: 7.9436
Epoch 99/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 99, Gen Loss: 4.8077, Disc Loss: 1.7614, Classifier Loss: 8.4331
Epoch 100/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.72batch/s]
Epoch 100, Gen Loss: 4.5044, Disc Loss: 2.6050, Classifier Loss: 7.7695
Images saved for epoch 100
Epoch 101/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.01batch/s]
Epoch 101, Gen Loss: 6.1949, Disc Loss: 3.1683, Classifier Loss: 8.2571
Epoch 102/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.50batch/s]
Epoch 102, Gen Loss: 4.3593, Disc Loss: 1.7779, Classifier Loss: 8.1849
Epoch 103/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 103, Gen Loss: 4.0825, Disc Loss: 1.8657, Classifier Loss: 7.9264
Epoch 104/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.88batch/s]
Epoch 104, Gen Loss: 4.8915, Disc Loss: 2.3621, Classifier Loss: 8.2910
Epoch 105/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 105, Gen Loss: 4.9276, Disc Loss: 2.4645, Classifier Loss: 8.0803
Epoch 106/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 106, Gen Loss: 3.5413, Disc Loss: 2.9221, Classifier Loss: 8.0287
Epoch 107/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.90batch/s]
Epoch 107, Gen Loss: 4.3095, Disc Loss: 2.3121, Classifier Loss: 7.9814
Epoch 108/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.87batch/s]
Epoch 108, Gen Loss: 5.0843, Disc Loss: 3.8634, Classifier Loss: 8.3085
Epoch 109/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.89batch/s]
Epoch 109, Gen Loss: 5.3498, Disc Loss: 3.3315, Classifier Loss: 8.3893
Epoch 110/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 110, Gen Loss: 4.2341, Disc Loss: 2.7947, Classifier Loss: 8.2325
Images saved for epoch 110
Epoch 111/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.01batch/s]
Epoch 111, Gen Loss: 4.6002, Disc Loss: 1.8842, Classifier Loss: 8.1469
Epoch 112/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 112, Gen Loss: 4.4319, Disc Loss: 2.6300, Classifier Loss: 8.2147
Epoch 113/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 113, Gen Loss: 3.2152, Disc Loss: 1.6956, Classifier Loss: 8.2926
Epoch 114/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.69batch/s]
Epoch 114, Gen Loss: 3.2332, Disc Loss: 2.2438, Classifier Loss: 8.3086
Epoch 115/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.86batch/s]
Epoch 115, Gen Loss: 3.7737, Disc Loss: 1.6670, Classifier Loss: 7.9067
Epoch 116/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 116, Gen Loss: 2.9397, Disc Loss: 1.5539, Classifier Loss: 7.9761
Epoch 117/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 117, Gen Loss: 3.8646, Disc Loss: 2.3529, Classifier Loss: 7.7462
Epoch 118/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.81batch/s]
Epoch 118, Gen Loss: 3.1109, Disc Loss: 2.0428, Classifier Loss: 8.2205
Epoch 119/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.68batch/s]
Epoch 119, Gen Loss: 5.6069, Disc Loss: 3.4226, Classifier Loss: 7.9839
Epoch 120/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 120, Gen Loss: 4.0441, Disc Loss: 2.6590, Classifier Loss: 8.4446
Images saved for epoch 120
Epoch 121/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 121, Gen Loss: 5.3502, Disc Loss: 2.7096, Classifier Loss: 8.1491
Epoch 122/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 122, Gen Loss: 4.7436, Disc Loss: 2.0926, Classifier Loss: 8.2749
Epoch 123/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 123, Gen Loss: 4.6128, Disc Loss: 2.5422, Classifier Loss: 8.0075
Epoch 124/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 124, Gen Loss: 2.9434, Disc Loss: 1.2778, Classifier Loss: 8.0128
Epoch 125/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 125, Gen Loss: 3.2968, Disc Loss: 1.5376, Classifier Loss: 8.0500
Epoch 126/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 126, Gen Loss: 2.9129, Disc Loss: 1.4310, Classifier Loss: 8.2561
Epoch 127/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 127, Gen Loss: 3.1443, Disc Loss: 1.6282, Classifier Loss: 8.0288
Epoch 128/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.92batch/s]
Epoch 128, Gen Loss: 3.3765, Disc Loss: 1.1623, Classifier Loss: 8.2100
Epoch 129/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.77batch/s]
Epoch 129, Gen Loss: 3.8520, Disc Loss: 2.0729, Classifier Loss: 8.1334
Epoch 130/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 130, Gen Loss: 6.0086, Disc Loss: 5.9329, Classifier Loss: 8.0211
Images saved for epoch 130
Epoch 131/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.86batch/s]
Epoch 131, Gen Loss: 3.1124, Disc Loss: 3.1381, Classifier Loss: 8.1134
Epoch 132/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 132, Gen Loss: 4.0476, Disc Loss: 2.5124, Classifier Loss: 8.1150
Epoch 133/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.12batch/s]
Epoch 133, Gen Loss: 3.2586, Disc Loss: 2.0306, Classifier Loss: 7.6143
Epoch 134/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 134, Gen Loss: 2.6255, Disc Loss: 1.5605, Classifier Loss: 8.2423
Epoch 135/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.82batch/s]
Epoch 135, Gen Loss: 3.5749, Disc Loss: 2.1093, Classifier Loss: 7.8689
Epoch 136/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 136, Gen Loss: 4.0551, Disc Loss: 2.6529, Classifier Loss: 7.9000
Epoch 137/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 137, Gen Loss: 6.6034, Disc Loss: 4.3206, Classifier Loss: 7.7501
Epoch 138/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.83batch/s]
Epoch 138, Gen Loss: 6.6949, Disc Loss: 5.5500, Classifier Loss: 7.9874
Epoch 139/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 139, Gen Loss: 9.9595, Disc Loss: 6.7662, Classifier Loss: 7.7161
Epoch 140/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.65batch/s]
Epoch 140, Gen Loss: 8.9149, Disc Loss: 4.8904, Classifier Loss: 7.7122
Images saved for epoch 140
Epoch 141/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 141, Gen Loss: 8.1048, Disc Loss: 4.5799, Classifier Loss: 7.6554
Epoch 142/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.69batch/s]
Epoch 142, Gen Loss: 8.4052, Disc Loss: 4.5937, Classifier Loss: 8.0073
Epoch 143/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 143, Gen Loss: 6.4195, Disc Loss: 2.4807, Classifier Loss: 7.9868
Epoch 144/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.95batch/s]
Epoch 144, Gen Loss: 4.1323, Disc Loss: 1.6460, Classifier Loss: 8.2593
Epoch 145/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.01batch/s]
Epoch 145, Gen Loss: 5.6677, Disc Loss: 1.3677, Classifier Loss: 8.0081
Epoch 146/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.81batch/s]
Epoch 146, Gen Loss: 4.2920, Disc Loss: 1.5095, Classifier Loss: 7.9223
Epoch 147/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.83batch/s]
Epoch 147, Gen Loss: 4.4594, Disc Loss: 1.1692, Classifier Loss: 8.3736
Epoch 148/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.89batch/s]
Epoch 148, Gen Loss: 4.3359, Disc Loss: 1.0079, Classifier Loss: 8.3643
Epoch 149/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.04batch/s]
Epoch 149, Gen Loss: 3.0944, Disc Loss: 0.7707, Classifier Loss: 8.2788
Epoch 150/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.06batch/s]
Epoch 150, Gen Loss: 3.2511, Disc Loss: 0.8198, Classifier Loss: 7.9124
Images saved for epoch 150
Epoch 151/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.37batch/s]
Epoch 151, Gen Loss: 3.4435, Disc Loss: 1.0347, Classifier Loss: 8.3295
Epoch 152/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.63batch/s]
Epoch 152, Gen Loss: 2.6405, Disc Loss: 1.3796, Classifier Loss: 8.2988
Epoch 153/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.74batch/s]
Epoch 153, Gen Loss: 2.9751, Disc Loss: 1.0425, Classifier Loss: 8.0303
Epoch 154/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.03batch/s]
Epoch 154, Gen Loss: 3.3158, Disc Loss: 1.4760, Classifier Loss: 7.9636
Epoch 155/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 155, Gen Loss: 3.2886, Disc Loss: 1.6179, Classifier Loss: 7.9416
Epoch 156/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 156, Gen Loss: 3.4841, Disc Loss: 1.7421, Classifier Loss: 7.8600
Epoch 157/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 157, Gen Loss: 2.9034, Disc Loss: 2.2199, Classifier Loss: 7.8513
Epoch 158/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 158, Gen Loss: 3.5215, Disc Loss: 1.9085, Classifier Loss: 7.9490
Epoch 159/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.98batch/s]
Epoch 159, Gen Loss: 3.3178, Disc Loss: 1.2893, Classifier Loss: 8.1842
Epoch 160/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.77batch/s]
Epoch 160, Gen Loss: 2.7418, Disc Loss: 1.4690, Classifier Loss: 7.8741
Images saved for epoch 160
Epoch 161/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.74batch/s]
Epoch 161, Gen Loss: 3.6668, Disc Loss: 1.7590, Classifier Loss: 7.9203
Epoch 162/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.76batch/s]
Epoch 162, Gen Loss: 3.7214, Disc Loss: 2.0348, Classifier Loss: 8.3451
Epoch 163/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 163, Gen Loss: 3.2852, Disc Loss: 2.5410, Classifier Loss: 8.1008
Epoch 164/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 164, Gen Loss: 4.9013, Disc Loss: 2.6033, Classifier Loss: 8.3120
Epoch 165/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.78batch/s]
Epoch 165, Gen Loss: 4.3924, Disc Loss: 2.2480, Classifier Loss: 8.1562
Epoch 166/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 166, Gen Loss: 3.6308, Disc Loss: 1.4095, Classifier Loss: 8.2171
Epoch 167/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:11<00:00,  8.71batch/s]
Epoch 167, Gen Loss: 3.8812, Disc Loss: 1.3061, Classifier Loss: 8.1779
Epoch 168/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 168, Gen Loss: 3.9862, Disc Loss: 1.6201, Classifier Loss: 7.9934
Epoch 169/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.04batch/s]
Epoch 169, Gen Loss: 3.8182, Disc Loss: 0.7478, Classifier Loss: 8.0025
Epoch 170/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 170, Gen Loss: 3.0285, Disc Loss: 1.0227, Classifier Loss: 7.9804
Images saved for epoch 170
Epoch 171/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 171, Gen Loss: 4.1838, Disc Loss: 1.3926, Classifier Loss: 7.5489
Epoch 172/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 172, Gen Loss: 3.2475, Disc Loss: 1.4497, Classifier Loss: 7.7951
Epoch 173/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 173, Gen Loss: 4.1380, Disc Loss: 1.8105, Classifier Loss: 7.8789
Epoch 174/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 174, Gen Loss: 3.4627, Disc Loss: 2.3572, Classifier Loss: 8.1590
Epoch 175/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 175, Gen Loss: 4.2784, Disc Loss: 2.9038, Classifier Loss: 8.2364
Epoch 176/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.82batch/s]
Epoch 176, Gen Loss: 5.5446, Disc Loss: 3.9193, Classifier Loss: 8.2542
Epoch 177/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 177, Gen Loss: 7.6566, Disc Loss: 5.7329, Classifier Loss: 8.1248
Epoch 178/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 178, Gen Loss: 7.6028, Disc Loss: 4.9357, Classifier Loss: 8.3666
Epoch 179/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 179, Gen Loss: 3.9909, Disc Loss: 3.2502, Classifier Loss: 8.3520
Epoch 180/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.07batch/s]
Epoch 180, Gen Loss: 6.8898, Disc Loss: 3.3367, Classifier Loss: 8.1410
Images saved for epoch 180
Epoch 181/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 181, Gen Loss: 7.0419, Disc Loss: 5.5919, Classifier Loss: 8.1210
Epoch 182/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.85batch/s]
Epoch 182, Gen Loss: 5.7494, Disc Loss: 3.9952, Classifier Loss: 8.0975
Epoch 183/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.91batch/s]
Epoch 183, Gen Loss: 4.8533, Disc Loss: 3.4929, Classifier Loss: 7.7061
Epoch 184/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.73batch/s]
Epoch 184, Gen Loss: 4.1334, Disc Loss: 2.0282, Classifier Loss: 8.4863
Epoch 185/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.96batch/s]
Epoch 185, Gen Loss: 3.2571, Disc Loss: 1.2620, Classifier Loss: 8.1477
Epoch 186/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.88batch/s]
Epoch 186, Gen Loss: 3.6496, Disc Loss: 1.0231, Classifier Loss: 7.9964
Epoch 187/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.75batch/s]
Epoch 187, Gen Loss: 3.6962, Disc Loss: 0.8322, Classifier Loss: 8.2160
Epoch 188/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.80batch/s]
Epoch 188, Gen Loss: 3.3850, Disc Loss: 0.8965, Classifier Loss: 8.1095
Epoch 189/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 189, Gen Loss: 3.1861, Disc Loss: 1.1560, Classifier Loss: 8.1165
Epoch 190/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.99batch/s]
Epoch 190, Gen Loss: 3.4970, Disc Loss: 1.4988, Classifier Loss: 8.0882
Images saved for epoch 190
Epoch 191/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.93batch/s]
Epoch 191, Gen Loss: 3.1431, Disc Loss: 1.2192, Classifier Loss: 8.1368
Epoch 192/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.02batch/s]
Epoch 192, Gen Loss: 3.3637, Disc Loss: 2.4405, Classifier Loss: 7.9469
Epoch 193/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 193, Gen Loss: 3.7947, Disc Loss: 2.7700, Classifier Loss: 8.0673
Epoch 194/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.89batch/s]
Epoch 194, Gen Loss: 5.9686, Disc Loss: 3.0074, Classifier Loss: 8.0861
Epoch 195/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 195, Gen Loss: 3.8971, Disc Loss: 2.0808, Classifier Loss: 7.9681
Epoch 196/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  9.05batch/s]
Epoch 196, Gen Loss: 4.7931, Disc Loss: 1.3231, Classifier Loss: 8.2123
Epoch 197/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.97batch/s]
Epoch 197, Gen Loss: 4.4564, Disc Loss: 1.0725, Classifier Loss: 7.9989
Epoch 198/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.84batch/s]
Epoch 198, Gen Loss: 3.0172, Disc Loss: 0.8137, Classifier Loss: 8.1477
Epoch 199/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.79batch/s]
Epoch 199, Gen Loss: 3.4116, Disc Loss: 1.3384, Classifier Loss: 8.0142
Epoch 200/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 96/96 [00:10<00:00,  8.94batch/s]
Epoch 200, Gen Loss: 4.1969, Disc Loss: 2.5777, Classifier Loss: 7.8935
Images saved for epoch 200

As seen above, the generator loss wavered up and down rather than decreasing steadily. This model was trained multiple times, and the losses never came down as far as I would have liked; even after 200 epochs they were still slowly drifting downward, so the model could likely benefit from further training. GAN models are quite tricky to train and require a lot of tuning to keep the right balance between the generator and discriminator. With the limited compute resources available at the time, further training and tuning will need to be done in future work.
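
One common balancing lever (illustrative only, not used in the run above) is to skip the discriminator's update whenever it gets too far ahead of the generator, giving the generator room to recover. A minimal sketch, with a hypothetical `ratio` threshold:

```python
def should_update_discriminator(gen_loss: float, disc_loss: float,
                                ratio: float = 3.0) -> bool:
    """Hypothetical rule: pause D updates while G is losing badly.

    If the generator's loss exceeds `ratio` times the discriminator's,
    the discriminator is winning too easily; skipping its update lets
    the generator catch up.
    """
    return gen_loss < ratio * disc_loss

# Balanced losses: keep training the discriminator
print(should_update_discriminator(4.0, 2.0))   # True  (4.0 < 6.0)
# Generator collapsing: give it room to recover
print(should_update_discriminator(9.9, 2.0))   # False (9.9 >= 6.0)
```

In the training loop, this check would gate the `disc_optimizer.apply_gradients` call; the threshold itself would need tuning just like any other hyperparameter.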

Next, train a StyleGAN2 model on all of the adequate-quality images in this dataset to see whether it can outperform the previous model. This model will also be trained for 200 epochs.

In [ ]:
labels_data = pd.read_csv(label_path)
labels_data = labels_data[labels_data['quality'] != 'Inadequate']
labels_data.shape
Out[ ]:
(14279, 34)
In [ ]:
stylegan2_train, stylegan2_val = train_test_split(labels_data, test_size=0.2, random_state=12)

BATCH_SIZE = 16

# Create datasets
stylegan2_train_dataset = create_dataset(stylegan2_train, batch_size=BATCH_SIZE, shuffle=True, augment=True)
stylegan2_val_dataset = create_dataset(stylegan2_val, batch_size=BATCH_SIZE, shuffle=False, augment=False)

# Inspect the number of batches in the training and validation datasets
print(f"\nNumber of batches in training dataset: {tf.data.experimental.cardinality(stylegan2_train_dataset)}")
print(f"Number of batches in validation dataset: {tf.data.experimental.cardinality(stylegan2_val_dataset)}")

# Inspect the first batch of the training dataset
for images, labels_batch in stylegan2_train_dataset.take(1):
    print(f"\nShape of the image batch: {images.shape}")
    print(f"Shape of the labels batch: {labels_batch.shape}")
    print(f"Sample labels from the first image: {labels_batch[0]}")
Number of batches in training dataset: 714
Number of batches in validation dataset: 179

Shape of the image batch: (16, 224, 224, 3)
Shape of the labels batch: (16, 1)
Sample labels from the first image: [0.]
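
As a quick sanity check (plain Python; this assumes scikit-learn's convention of rounding the test split up and `tf.data` keeping the final partial batch), the batch counts printed above follow directly from the 80/20 split of the 14,279 adequate-quality images:

```python
import math

total_images = 14279                          # images after quality filtering
test_size = math.ceil(total_images * 0.2)     # train_test_split rounds test up
train_size = total_images - test_size

BATCH_SIZE = 16
# The last partial batch is kept, hence ceiling division
train_batches = math.ceil(train_size / BATCH_SIZE)
val_batches = math.ceil(test_size / BATCH_SIZE)
print(train_batches, val_batches)  # 714 179
```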

Note that this trains the StyleGAN2 model on the entire BRSET dataset (minus inadequate-quality images), not just the diabetic retinopathy images. This should help it generate synthetic images that are representative of the dataset as a whole.

StyleGAN2 ImplementationΒΆ

In [ ]:
# StyleGAN2 Generator and Discriminator
class AdaIN(layers.Layer):
    def __init__(self, **kwargs):
        super(AdaIN, self).__init__(**kwargs)

    def build(self, input_shape):
        content_shape, style_shape = input_shape
        self.channels = content_shape[-1]
        self.style_scale = self.add_weight(name="style_scale", shape=(style_shape[-1], self.channels), initializer="random_normal")
        self.style_bias = self.add_weight(name="style_bias", shape=(style_shape[-1], self.channels), initializer="random_normal")
        super(AdaIN, self).build(input_shape)

    def call(self, inputs):
        content, style = inputs
        mean, var = tf.nn.moments(content, axes=[1, 2], keepdims=True)
        normalized = (content - mean) / tf.sqrt(var + 1e-8)

        style = tf.expand_dims(style, axis=1)
        style = tf.expand_dims(style, axis=1)

        scale = tf.matmul(style, self.style_scale)
        bias = tf.matmul(style, self.style_bias)

        return scale * normalized + bias

    def get_config(self):
        config = super().get_config()
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# StyleBlock layer which is unique to StyleGAN architecture
class StyleBlock(layers.Layer):
    def __init__(self, filters, kernel_size, **kwargs):
        super(StyleBlock, self).__init__(**kwargs)
        self.filters = filters
        self.kernel_size = kernel_size
        self.conv = layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)
        self.adain = AdaIN()
        self.activation = layers.LeakyReLU(0.2)

    def call(self, inputs):
        x, w = inputs
        x = self.conv(x)
        x = self.adain([x, w])
        return self.activation(x)

    def get_config(self):
        config = super().get_config()
        config.update({
            "filters": self.filters,
            "kernel_size": self.kernel_size
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Mapping network for StyleGAN2 architecture
class MappingNetwork(keras.Model):
    def __init__(self, latent_dim, n_layers=8, **kwargs):
        super(MappingNetwork, self).__init__(**kwargs)
        self.latent_dim = latent_dim
        self.n_layers = n_layers
        self.layers_list = []
        for _ in range(n_layers):
            self.layers_list.append(layers.Dense(latent_dim, activation='leaky_relu'))
        self.layers_list.append(layers.Dense(latent_dim))

    def call(self, inputs):
        x = inputs
        for layer in self.layers_list:
            x = layer(x)
        return x

    def get_config(self):
        config = super().get_config()
        config.update({
            "latent_dim": self.latent_dim,
            "n_layers": self.n_layers
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# StyleGAN2 Generator model
class StyleGAN2Generator(keras.Model):
    def __init__(self, latent_dim, **kwargs):
        super(StyleGAN2Generator, self).__init__(**kwargs)
        self.latent_dim = latent_dim
        self.mapping = MappingNetwork(latent_dim)

        self.input_dense = layers.Dense(7 * 7 * 512)

        self.conv_blocks = [
            StyleBlock(512, 3),
            StyleBlock(256, 3),
            StyleBlock(128, 3),
            StyleBlock(64, 3),
            StyleBlock(32, 3),
        ]

        self.to_rgb = layers.Conv2D(3, 1, padding="same", activation="tanh")

    def call(self, inputs):
        w = self.mapping(inputs)

        x = self.input_dense(w)
        # Reshape the dense output into the initial 7x7 feature map
        # (tf.reshape instead of constructing a new Reshape layer per call)
        x = tf.reshape(x, (-1, 7, 7, 512))

        for block in self.conv_blocks:
            x = block([x, w])
            # 2x nearest-neighbour upsampling (equivalent to UpSampling2D):
            # 7 -> 14 -> 28 -> 56 -> 112 -> 224
            x = tf.image.resize(x, tf.shape(x)[1:3] * 2, method="nearest")

        return self.to_rgb(x)

    def get_config(self):
        config = super().get_config()
        config.update({
            "latent_dim": self.latent_dim,
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

class StyleGAN2Discriminator(keras.Model):
    def __init__(self, **kwargs):
        super(StyleGAN2Discriminator, self).__init__(**kwargs)
        self.conv_blocks = [
            layers.Conv2D(64, 3, strides=2, padding="same"),
            layers.Conv2D(128, 3, strides=2, padding="same"),
            layers.Conv2D(256, 3, strides=2, padding="same"),
            layers.Conv2D(512, 3, strides=2, padding="same"),
            layers.Conv2D(512, 3, strides=2, padding="same"),
        ]
        self.flatten = layers.Flatten()
        self.dense1 = layers.Dense(512, activation='leaky_relu')
        self.dense2 = layers.Dense(1)

    def call(self, inputs):
        # Datasets may yield (image, label) tuples; keep only the images
        if isinstance(inputs, tuple):
            x = inputs[0]
        else:
            x = inputs

        for block in self.conv_blocks:
            x = block(x)
            # Functional activation instead of building a new LeakyReLU
            # layer on every call
            x = tf.nn.leaky_relu(x, alpha=0.2)
        x = self.flatten(x)
        x = self.dense1(x)
        return self.dense2(x)  # raw logit; no sigmoid here

    def get_config(self):
        config = super().get_config()
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Loss functions (binary cross-entropy computed on raw logits)
def generator_loss(fake_output):
    # from_logits=True is required: the discriminator's final Dense layer
    # has no sigmoid, so its outputs are unbounded logits. Without it,
    # Keras clips the "probabilities" to [1e-7, 1 - 1e-7] and the loss
    # pins at -ln(1e-7) = 16.1181 (the plateau visible in the training
    # log below).
    return tf.keras.losses.binary_crossentropy(tf.ones_like(fake_output), fake_output, from_logits=True)

def discriminator_loss(real_output, fake_output):
    real_loss = tf.keras.losses.binary_crossentropy(tf.ones_like(real_output), real_output, from_logits=True)
    fake_loss = tf.keras.losses.binary_crossentropy(tf.zeros_like(fake_output), fake_output, from_logits=True)
    return real_loss + fake_loss

# Training step
@tf.function
def train_step(real_images, generator, discriminator, gen_optimizer, disc_optimizer, latent_dim):
    if isinstance(real_images, tuple):
        real_images = real_images[0]  # Take only the images, ignore the labels

    batch_size = tf.shape(real_images)[0]  # Get the actual batch size
    noise = tf.random.normal([batch_size, latent_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(real_images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    gen_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    disc_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

    return tf.reduce_mean(gen_loss), tf.reduce_mean(disc_loss)

# Setup and training
latent_dim = 100
batch_size = 16
img_shape = (224, 224, 3)
checkpoint_interval = 50

generator = StyleGAN2Generator(latent_dim)
discriminator = StyleGAN2Discriminator()

# Optimizers with StyleGAN2-style Adam hyperparameters (beta_1=0, beta_2=0.99)
gen_optimizer = tf.keras.optimizers.Adam(1e-4, beta_1=0.0, beta_2=0.99, epsilon=1e-8)
disc_optimizer = tf.keras.optimizers.Adam(1e-4, beta_1=0.0, beta_2=0.99, epsilon=1e-8)


epochs = 200
for epoch in range(epochs):
    total_gen_loss = 0.0
    total_disc_loss = 0.0
    num_batches = 0

    progress_bar = tqdm(stylegan2_train_dataset, desc=f'Epoch {epoch + 1}/{epochs}')

    for batch in progress_bar:
        gen_loss, disc_loss = train_step(batch, generator, discriminator, gen_optimizer, disc_optimizer, latent_dim)
        total_gen_loss += gen_loss.numpy()
        total_disc_loss += disc_loss.numpy()
        num_batches += 1

        # Update progress bar description with current losses
        progress_bar.set_postfix({
            'Gen Loss': f'{gen_loss.numpy():.4f}',
            'Disc Loss': f'{disc_loss.numpy():.4f}'
        })

    avg_gen_loss = total_gen_loss / num_batches
    avg_disc_loss = total_disc_loss / num_batches

    print(f'\nEpoch {epoch + 1}, Avg Gen Loss: {avg_gen_loss:.4f}, Avg Disc Loss: {avg_disc_loss:.4f}')

    # Save checkpoint models
    if (epoch + 1) % checkpoint_interval == 0:
        generator.save(f'{BASE_DIR}stylegan2_generator_epoch_{epoch+1}.keras')
        discriminator.save(f'{BASE_DIR}stylegan2_discriminator_epoch_{epoch+1}.keras')

    if (epoch + 1) % 10 == 0:
        # Generate and save sample images
        noise = tf.random.normal([1, latent_dim])
        generated_images = generator(noise, training=False)
        # Save the generated image
        plt.imshow(generated_images[0] * 0.5 + 0.5)  # Rescale from [-1, 1] to [0, 1]
        plt.axis('off')
        plt.savefig(f'{BASE_DIR}generated_image_epoch_{epoch+1}.png')
        plt.close()


generator.save(f'{BASE_DIR}stylegan2_generator.keras')
discriminator.save(f'{BASE_DIR}stylegan2_discriminator.keras')
Epoch 1/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:23<00:00,  8.53it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 1, Avg Gen Loss: 11.1012, Avg Disc Loss: 11.2616
Epoch 2/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 2, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 3/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 3, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 4/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 4, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 5/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 5, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 6/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 6, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 7/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 7, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 8/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 8, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 9/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 9, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 10/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 10, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[ ]:
<matplotlib.image.AxesImage at 0x7b49507e6140>
Out[ ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 11/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.24it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 11, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 12/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 12, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 13/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 13, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 14/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 14, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 15/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 15, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 16/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 16, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 17/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 17, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 18/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 18, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 19/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 19, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 20/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 20, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[ ]:
<matplotlib.image.AxesImage at 0x7b4950637fa0>
Out[ ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 21/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 21, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 22/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 22, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 23/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.24it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 23, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 24/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 24, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 25/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 25, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 26/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 26, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 27/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 27, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 28/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 28, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 29/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 29, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 30/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 30, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[ ]:
<matplotlib.image.AxesImage at 0x7b495068a050>
Out[ ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 31/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 31, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 32/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 32, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 33/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 33, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 34/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 34, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 35/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 35, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 36/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 36, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 37/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 37, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 38/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 38, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 39/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 39, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 40/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 40, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[ ]:
<matplotlib.image.AxesImage at 0x7b49506ab9a0>
Out[ ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 41/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 41, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 42/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 42, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 43/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.24it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 43, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 44/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 44, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 45/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 45, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 46/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 46, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 47/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 47, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 48/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 48, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 49/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 49, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 50/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 50, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b49590e9510>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 51/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 51, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 52/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 52, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 53/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 53, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 54/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 54, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 55/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 55, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 56/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 56, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 57/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 57, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 58/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.26it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 58, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 59/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 59, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 60/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 60, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b49590fedd0>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 61/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 61, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 62/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 62, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 63/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 63, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 64/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 64, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 65/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 65, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 66/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 66, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 67/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 67, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 68/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.12it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 68, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 69/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 69, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 70/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 70, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4959184760>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 71/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.13it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 71, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 72/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 72, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 73/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 73, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 74/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 74, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 75/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 75, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 76/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 76, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 77/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 77, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 78/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 78, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 79/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 79, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 80/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 80, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958fba0b0>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 81/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 81, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 82/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 82, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 83/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 83, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 84/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 84, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 85/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 85, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 86/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 86, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 87/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 87, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 88/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 88, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 89/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 89, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 90/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 90, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958fe3a00>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 91/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.13it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 91, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 92/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.25it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 92, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 93/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 93, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 94/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 94, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 95/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 95, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 96/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 96, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 97/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.24it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 97, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 98/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 98, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 99/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 99, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 100/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 100, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4959045450>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 101/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 101, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 102/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 102, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 103/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 103, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 104/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 104, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 105/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 105, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 106/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 106, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 107/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 107, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 108/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 108, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 109/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 109, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 110/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.24it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 110, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b495909ace0>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 111/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 111, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 112/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 112, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 113/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.12it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 113, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 114/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 114, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 115/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 115, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 116/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 116, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 117/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 117, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 118/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 118, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 119/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 119, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 120/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 120, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958f33d90>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 121/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 121, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 122/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 122, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 123/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 123, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 124/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 124, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 125/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 125, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 126/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 126, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 127/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 127, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 128/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 128, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 129/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 129, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 130/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 130, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958f2dff0>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 131/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 131, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 132/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 132, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 133/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.25it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 133, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 134/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 134, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 135/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 135, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 136/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 136, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 137/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.09it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 137, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 138/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.21it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 138, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 139/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 139, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 140/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 140, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958f77940>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 141/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 141, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 142/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.11it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 142, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 143/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 143, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 144/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 144, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 145/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 145, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 146/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 146, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 147/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 147, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 148/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 148, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 149/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 149, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 150/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 150, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958e05330>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 151/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 151, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 152/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 152, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 153/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.11it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 153, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 154/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 154, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 155/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.11it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 155, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 156/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 156, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 157/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 157, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 158/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 158, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 159/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.09it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 159, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 160/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 160, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958e16c20>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 161/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 161, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 162/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 162, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 163/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.12it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 163, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 164/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 164, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 165/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 165, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 166/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 166, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 167/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 167, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 168/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 168, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 169/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 169, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 170/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.13it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 170, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958eabe50>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 171/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 171, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 172/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 172, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 173/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 173, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 174/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 174, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 175/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 175, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 176/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 176, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 177/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 177, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 178/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.16it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 178, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 179/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 179, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 180/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 180, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958cedf60>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 181/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 181, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 182/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 182, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 183/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 183, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 184/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 184, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 185/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.18it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 185, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 186/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 186, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 187/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 187, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 188/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 188, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 189/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.22it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 189, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 190/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.14it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 190, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958d03910>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
Epoch 191/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.23it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 191, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 192/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:09<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 192, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 193/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 193, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 194/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.15it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 194, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 195/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.20it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 195, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 196/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.17it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 196, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 197/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.13it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 197, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 198/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.19it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 198, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 199/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.12it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 199, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Epoch 200/200: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 714/714 [01:10<00:00, 10.10it/s, Gen Loss=16.1181, Disc Loss=16.1181]
Epoch 200, Avg Gen Loss: 16.1181, Avg Disc Loss: 16.1181
Out[Β ]:
<matplotlib.image.AxesImage at 0x7b4958d95330>
Out[Β ]:
(-0.5, 223.5, 223.5, -0.5)
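The generator and discriminator losses in the log above never move from 16.1181, which usually signals stalled training (vanishing gradients, a frozen optimizer, or a logging bug) rather than convergence. A minimal, framework-agnostic check for this kind of flat-lined loss history might look like the following; the function name and tolerance are illustrative, not part of the notebook:

```python
def losses_stagnant(history, window=10, tol=1e-6):
    """Return True if the last `window` loss values vary by less than `tol` --
    a cheap check for the kind of flat-lined training seen in the log above."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

# Mirrors the logged generator losses: identical value every epoch.
gen_history = [16.1181] * 20
print(losses_stagnant(gen_history))  # → True
```

Calling this at the end of each epoch would allow training to stop early (or trigger a learning-rate change) instead of running all 200 epochs with no progress.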

Extract Features¶

The next step is to extract features from the generated images using a pre-trained classifier model. These features are used to identify the attributes that most influence the classifier's predictions.

In [ ]:
# Attribute extraction: rank latent dimensions by how strongly a small
# perturbation to each one changes the classifier's predictions.
def extract_attributes(generator, classifier, num_samples, latent_dim, num_attributes, batch_size=100):
    attributes = []
    for i in range(latent_dim):
        # Fresh latent batch and its baseline predictions for this dimension
        noise = tf.random.normal([num_samples, latent_dim])
        base_imgs = generator(noise)
        base_preds = classifier(base_imgs)

        pred_diffs = []
        for start in range(0, num_samples, batch_size):
            end = min(start + batch_size, num_samples)
            noise_mod = noise[start:end].numpy()
            noise_mod[:, i] += 0.1  # Small perturbation along dimension i
            mod_imgs = generator(noise_mod)
            mod_preds = classifier(mod_imgs)

            # Mean absolute change in predictions caused by the perturbation
            pred_diff = tf.reduce_mean(tf.abs(mod_preds - base_preds[start:end]))
            pred_diffs.append(pred_diff.numpy())

        avg_pred_diff = np.mean(pred_diffs)
        attributes.append((i, avg_pred_diff))

    # Most influential dimensions first
    attributes.sort(key=lambda x: x[1], reverse=True)
    return [attr[0] for attr in attributes[:num_attributes]]

# Visualize an attribute: show one generated image next to the same image
# with the chosen latent dimension increased.
def visualize_attribute(generator, attribute_idx, latent_dim):
    noise = tf.random.normal([1, latent_dim])
    base_img = generator(noise)

    noise_mod = noise.numpy()
    noise_mod[0, attribute_idx] += 1  # Larger shift so the change is visible
    mod_img = generator(noise_mod)

    plt.figure(figsize=(10, 5))
    plt.subplot(1, 2, 1)
    plt.imshow(base_img[0].numpy())
    plt.title("Original Image")
    plt.axis('off')

    plt.subplot(1, 2, 2)
    plt.imshow(mod_img[0].numpy())
    plt.title(f"Modified Image (Attribute {attribute_idx})")
    plt.axis('off')

    plt.show()

StyleGAN2 Features¶

The features extracted from the StyleGAN2-generated images are used to analyze the attributes that influence the classifier's predictions. They are then visualized and compared against real images to highlight the differences and similarities between the two sets.
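As a sketch of what such a real-versus-generated comparison could measure, the snippet below contrasts per-feature means and standard deviations of two feature sets. The random arrays stand in for classifier feature vectors of real and generated batches, and `feature_gap` is a hypothetical helper, not part of the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for classifier feature vectors; in the notebook these would come
# from running the classifier on batches of real and generated fundus images.
real_feats = rng.normal(loc=0.0, scale=1.0, size=(100, 64))
gen_feats = rng.normal(loc=0.1, scale=1.2, size=(100, 64))

def feature_gap(a, b):
    """Mean absolute difference of per-feature means and stds -- a crude
    proxy for how far the generated distribution drifts from the real one."""
    mean_gap = np.abs(a.mean(axis=0) - b.mean(axis=0)).mean()
    std_gap = np.abs(a.std(axis=0) - b.std(axis=0)).mean()
    return mean_gap, std_gap

mean_gap, std_gap = feature_gap(real_feats, gen_feats)
print(f"mean gap: {mean_gap:.3f}, std gap: {std_gap:.3f}")
```

A gap near zero for both statistics would suggest the generator matches the real feature distribution at least at the first two moments; richer comparisons (e.g. FID) use the full covariance instead.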

In [ ]:
# Load the models
custom_objects = {
    'StyleGAN2Generator': StyleGAN2Generator,
    'StyleGAN2Discriminator': StyleGAN2Discriminator,
    'AdaIN': AdaIN,
    'StyleBlock': StyleBlock,
    'MappingNetwork': MappingNetwork
}

with keras.utils.custom_object_scope(custom_objects):
    loaded_generator = keras.models.load_model(f'{BASE_DIR}stylegan2_generator.keras')
    loaded_discriminator = keras.models.load_model(f'{BASE_DIR}stylegan2_discriminator.keras')

print("Models loaded successfully!")


latent_dim = loaded_generator.latent_dim
num_samples = 100
num_attributes = 10
classifier = keras.models.load_model(f'{BASE_DIR}vgg_model.keras')
top_attributes = extract_attributes(loaded_generator, classifier, num_samples, latent_dim, num_attributes)

for attr in top_attributes:
    visualize_attribute(loaded_generator, attr, latent_dim)
Models loaded successfully!
[Output: ten pairs of original vs. attribute-modified images, one per top attribute; matplotlib warned that input data was clipped to the valid range for imshow with RGB data ([0..1] for floats).]

Stylex Features¶

Similarly, the features extracted from the Stylex-generated images are used to analyze which attributes drive the classifier's predictions, with the generated images again visualized and compared against the real ones.

In [ ]:
# Extract from the Stylex model and show the top attributes
with tf.keras.utils.custom_object_scope({'StylexGenerator': StylexGenerator, 'StylexDiscriminator': StylexDiscriminator}):
    generator = tf.keras.models.load_model(f'{BASE_DIR}stylex_generator.keras')
classifier = tf.keras.models.load_model(f'{BASE_DIR}classification_model.keras')

latent_dim = 100
num_samples = 100
num_attributes = 10

top_attributes = extract_attributes(generator, classifier, num_samples, latent_dim, num_attributes)

for attr in top_attributes:
    visualize_attribute(generator, attr, latent_dim)
[Output: ten pairs of original vs. attribute-modified images, one per top attribute; matplotlib warned that input data was clipped to the valid range for imshow with RGB data ([0..1] for floats).]

Conclusion¶

In this project I learned quite a bit about training deep learning models. I worked through recreating a research paper's approach on data that is not commonly trained on. The dataset is released under the PhysioNet license, which requires HIPAA training and a signed data use agreement, and because it is not easily accessible it sees relatively little use in research. I was able to train a classifier on the data and generate synthetic images using a GAN. The GAN was not able to produce images as good as the real ones, but with more training and tuning it could be possible. GANs are among the more difficult models to train, requiring careful tuning to balance the generator and discriminator. With more compute credits in the future, I plan to fine-tune this project for better results and, hopefully, to help the research community by publishing a paper on this work. I also hope to find a novel idea to improve the model and generate better synthetic images.

The classifier was able to classify the images with good accuracy, but the validation accuracy was not as high as I would have liked, likely due to the small dataset and the unique features of the images. Overall, this project was a great learning experience, and I hope to continue working on it in the future.

I hope you were able to learn something from this project and enjoyed reading about it. Thank you for reading!